Case Study
Lighting Designer: Full-Resolution Render
Challenge
Printing a plot has traditionally been a useful part of the lighting design process. Although one of the goals of LD is to reduce paper usage in an inherently wasteful industry, a printed plot can help keep the team on track during rig days, or serve as a backup reference while the show is shooting. The problem is that every device has a different size and shape, and no screen matches the resolution needed to print a large plot (think 3×5 feet). So how do we obtain a large enough image?
The Team
Just me, doing design and code.
Research
In other plotting and drafting apps, rendering tends to be invisible: you wait an indeterminate amount of time, and then it’s done. You may or may not see a progress bar. In the previous version of LD, before GPU acceleration, users saw a spinning activity indicator and a “Rendering” message, but technical limitations meant there was no way to tell how much time was left. Sometimes the wait was several minutes.
Pain Points
Overall, users’ main request was more visibility into the process: not just a percentage, but some kind of feedback or reward. I also wanted to reflect the purpose of the full-res render, which was usually to print on paper. Finally, I wanted to give users the option to skip rendering if they didn’t want to take the time or didn’t feel the plot was ready yet, and to stop a render in progress if they spotted a change they wanted to make.
Feature Ideas
I found a solution that satisfied both my users’ request and the technical constraints of rendering on an iOS device, which can have limited memory. The outer boundaries of the plot could be automatically calculated or user-defined, so I could take those boundaries and use the size of the workspace view (which differs by device) to divide the entire workspace into square pieces. The app would then move the workspace to each square, render it, and assemble the results in a visible, overlaid composite view so the user could watch the progress. That even made a separate progress meter or activity indicator unnecessary: the composite view acted as both, while rewarding the user with what was essentially the final product of their hard work.
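Here is a minimal Swift sketch of that tiling-and-compositing idea, under my own assumptions; the names (plotBounds, tileSize, the composite function) are illustrative, not the app’s actual API, and the real implementation renders the live workspace view rather than abstract rects.

```swift
import UIKit

// Split the plot's outer boundaries into workspace-sized square tiles.
// Edge tiles may extend slightly past the boundary; the composite simply crops them.
func tileRects(plotBounds: CGRect, tileSize: CGSize) -> [CGRect] {
    var rects: [CGRect] = []
    var y = plotBounds.minY
    while y < plotBounds.maxY {
        var x = plotBounds.minX
        while x < plotBounds.maxX {
            rects.append(CGRect(x: x, y: y, width: tileSize.width, height: tileSize.height))
            x += tileSize.width
        }
        y += tileSize.height
    }
    return rects
}

// Draw each rendered tile into one large image at its offset within the plot.
func composite(tiles: [(rect: CGRect, image: UIImage)],
               plotBounds: CGRect,
               scale: CGFloat) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    let renderer = UIGraphicsImageRenderer(size: plotBounds.size, format: format)
    return renderer.image { _ in
        for tile in tiles {
            let origin = CGPoint(x: tile.rect.minX - plotBounds.minX,
                                 y: tile.rect.minY - plotBounds.minY)
            tile.image.draw(in: CGRect(origin: origin, size: tile.rect.size))
        }
    }
}
```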
Prototyping
The first potential problem was memory: iOS devices don’t always have much to spare, and because users define the size and complexity of their plots, I couldn’t guarantee they wouldn’t create a plot that generated too many large images. To solve this, I added an option to render the pieces and save them to the device’s Photos library, so users could combine them afterward in Photoshop. It’s not an ideal scenario, but after speaking to users, I decided the option should exist. I did, however, hide it by default, turning it on only if the app actually receives a memory warning during rendering (or if the user enables it themselves).
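As a rough illustration of that fallback, a render controller could listen for the system memory-warning notification and switch into a “save tiles to Photos” mode mid-render. The class, property, and method names here are hypothetical, not the app’s actual code.

```swift
import UIKit

final class TiledRenderController {
    private(set) var saveTilesToPhotos = false   // hidden/off by default
    private var observer: NSObjectProtocol?

    func beginObservingMemoryPressure() {
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didReceiveMemoryWarningNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            // If the system warns about memory mid-render, stop compositing
            // in memory and start writing individual tiles out instead.
            self?.saveTilesToPhotos = true
        }
    }

    func handleRenderedTile(_ image: UIImage) {
        if saveTilesToPhotos {
            // Requires the photo-library-add usage description in Info.plist.
            UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
        } else {
            // Keep the tile for the in-app composite view (storage omitted here).
        }
    }
}
```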
The other hurdle I encountered was iOS device rotation: rendered images are automatically rotated to match the device’s orientation at the time of capture, so if the user rotates their device mid-render, it can throw off the composite image. I ended up capturing the orientation at the beginning of the process and applying that orientation to every piece, rather than reading the current one each time. Other than that, it was just a matter of placing the composite view on the screen and letting the user stop the render if they wanted to change something rather than wait for it to finish.
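That fix can be sketched as follows: capture an orientation once when the render session starts and re-wrap each tile’s bitmap with it, instead of reading the device orientation per tile. This is a simplified assumption of the approach, and the landscape/portrait mapping below is deliberately coarse.

```swift
import UIKit

struct RenderSession {
    // Captured once, when the render starts, rather than read per tile.
    let lockedOrientation: UIImage.Orientation

    init() {
        // Coarse mapping from device orientation to image orientation.
        let isLandscape = UIDevice.current.orientation.isLandscape
        lockedOrientation = isLandscape ? .right : .up
    }

    // Re-wrap each tile's bitmap with the locked orientation so a mid-render
    // device rotation can't skew the composite.
    func normalized(_ image: UIImage) -> UIImage {
        guard let cg = image.cgImage else { return image }
        return UIImage(cgImage: cg, scale: image.scale, orientation: lockedOrientation)
    }
}
```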
Final Design
What I Learned
The final product is a pleasing reward: users can sit back and admire their work, and the build-up is interesting enough that they don’t want to leave the app. Each square piece renders in one second, so the process moves quickly.