Saturday, September 8, 2007


Everyscape, a startup based in Waltham, MA, is getting in on the rush to create a virtual version of the real world. Although the site will launch this fall under the shadow of mapping giant Google Earth, Everyscape's cofounders say that users will find the company's look and feel quite different. "We're working on a human experience," CEO Jim Schoonmaker says. "Google has built a superhuman experience."

Everyscape's demo opens in the middle of San Francisco's Union Square, below the Dewey Monument. Users can choose the auto-drive mode, which gives a virtual tour of the area's sights and shops, or they can explore on their own. Auto drive orients a user by showing her the general layout of Union Square before taking her into Harry Denton's Starlight Lounge and bringing her out again for a dizzying, swirling look at the night sky above the Dewey Monument.

The site is designed to give a full immersive experience. A user should be able to tour Union Square virtually, Schoonmaker says, and then feel comfortable navigating it in real life.

Google Earth, in contrast, opens with a satellite's view of the earth resting in space. From there, users can fly down to explore chosen terrain or look out at the stars. While many areas are created with flat satellite photos, some locations include links to street-level photos taken by users. Images showing a 3-D view of certain buildings can also be layered onto the map, using a markup language called KML (Keyhole Markup Language).
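As a rough illustration of how that layering works, the snippet below writes a minimal KML file that places a textured 3-D building model (a COLLADA .dae file) at a longitude and latitude near Union Square. The file name and coordinates are invented for the example, and the exact schema details vary by Google Earth version.

```python
# Illustrative only: a minimal KML file that layers a 3-D building model onto
# the map. The model file name and coordinates are made up for this example.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example building</name>
    <Model>
      <Location>
        <longitude>-122.4075</longitude>
        <latitude>37.7880</latitude>
        <altitude>0</altitude>
      </Location>
      <Link><href>building.dae</href></Link>
    </Model>
  </Placemark>
</kml>
"""

with open("building.kml", "w") as f:
    f.write(kml)
```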

In Everyscape, building interiors are constructed the same way as the rest of the environment: by stitching together a series of panoramic photographs taken by company photographers or contributed by users. Within each photograph, a user can swivel through a full sphere of motion. To move users from within one panoramic photograph to the next, Everyscape's servers estimate the locations of the cameras in each photograph and use that information to build sparse 3-D geometry that forms the building blocks for an animated 3-D transition. Everyscape CTO and founder Mok Oh says that the transition works because it simulates people's real-life attitude toward moving from place to place. "Getting there is not what you want," he says. "Being there is what you want."
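Everyscape has not published its algorithms, but the camera-location step it describes resembles standard two-view geometry. The sketch below is one common way, using OpenCV, to recover the relative rotation and translation between two overlapping photos; it assumes ordinary perspective images with a known camera matrix K rather than full spherical panoramas, and is only an approximation of the kind of sparse geometric information a transition animation could be built on.

```python
# A minimal sketch (not Everyscape's actual pipeline) of estimating the
# relative pose between two overlapping photos.
import cv2
import numpy as np

def relative_pose(img1_path, img2_path, K):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # Detect and match local features between the two views.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the essential matrix, then decompose it into a rotation and a
    # (unit-scale) translation between the two camera positions.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```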

Derek Hoiem, a researcher at the University of Illinois at Urbana-Champaign who designed the technology behind the 3-D site Fotowoosh, says that 3-D immersive sites are popular now because of their appeal to users. "When you're able to control the environment, it feels more lifelike," he notes. While Hoiem says that Everyscape's technology gives a good approximation of motion, he also says that he would like to see greater freedom of movement, rather than just swiveling and transitioning.

Ironically, the original version of Everyscape's technology, used by the first company that Oh founded, Mok3, had the type of capability that's on Hoiem's wish list. Mok3 built software that could use panoramic photographs to generate environments interactive enough for a game engine, with results that look much like a walk-through captured on video. (One example is an animation of MIT's Infinite Corridor.) In search of a business model, Oh scaled back the technology to make it transfer more easily over the Internet and founded SuperTour Travel, which created interactive environments to show off high-end hotels and other travel destinations to potential customers. With Everyscape, Oh hopes to use what he learned with SuperTour to virtually reproduce the entire world.

In a business model based in part on SuperTour's, Everyscape plans to make money by helping businesses build their interiors for a fee. Schoonmaker says that he expects shopkeepers to understand the need to virtually display their physical inventory and store layout. "That's where all their money went," he says. "That's what they need to show you." In the future, Schoonmaker hopes to add more interactive features to help businesses function virtually. Future additions might give users the ability to buy merchandise inside a store with the click of a mouse, or might add a virtual maître d' that could help visitors make dinner reservations at a restaurant and recommend items on the menu.

Everyscape plans to launch this fall with environments for parts of San Francisco, Boston, and New York. Other future plans include adding user-controlled avatars and features for mobile devices.


Computer-generated effects are becoming increasingly realistic on the big screen, but these animations generally take hours to render. Now, Adobe Systems, the company famous for tools like Photoshop and Acrobat Reader, is developing software that could bring the power of a Hollywood animation studio to the average computer and let users render high-quality graphics in real time. Such software could be useful for displaying ever-more-realistic computer games on PCs and for allowing the average computer user to design complex and lifelike animations.

Adobe is focusing its efforts on ray tracing, a rendering technique that considers the behavior of light as it bounces off objects. Since it takes so long to render, ray tracing is typically used for precomputed effects that are added to films, computer games, and even still pictures before they reach the consumer, explains Gavin Miller, senior principal scientist at Adobe.

With the rise of multicore computing, Miller says, more consumers have machines with the capability to compute ray-tracing algorithms. The challenge now, he says, is to find the best way to divvy up the graphics processes within general microprocessors. "Adobe's research goal is to discover the algorithms that enhance ray-tracing performance and make it accessible to consumers in near real-time form," Miller says.

Consumer computers and video-game consoles compute graphics using an approach called rasterization, explains John Hart, a professor of computer science at the University of Illinois at Urbana-Champaign. Rasterization renders a scene by generating only those pixels that will be visible to a viewer. This process is fast, but it doesn't allow for much realism, explains Hart. "Rasterization is limited in the kinds of visual effects it can produce, and has to be extensively customized just to be able to approximate the appearance of complicated reflective and translucent objects that ray tracing handles nicely." For instance, in real life, if a light is shining at the side of a car, some of that illumination could reflect off metal in the undercarriage, and this would create a reflection on the ground that's visible to a viewer who's looking at the car from above. Rasterization would ignore the pixels that make up the undercarriage, however, and the reflection would be lost.
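The visibility test at the heart of rasterization can be boiled down to a depth buffer: every triangle is broken into candidate pixels, and only the fragment closest to the camera at each pixel is kept. The toy sketch below (not production code) shows that logic; geometry the camera cannot see head-on, like the undercarriage in the example above, never reaches the framebuffer, so any light it reflects is lost.

```python
# Toy illustration of the depth-buffer idea behind rasterization.
import numpy as np

WIDTH, HEIGHT = 320, 240
depth = np.full((HEIGHT, WIDTH), np.inf)   # z-buffer: nearest depth seen so far
color = np.zeros((HEIGHT, WIDTH, 3))       # framebuffer

def rasterize(fragments):
    """fragments: iterable of (x, y, z, rgb) tuples produced by triangle setup."""
    for x, y, z, rgb in fragments:
        if 0 <= x < WIDTH and 0 <= y < HEIGHT and z < depth[y, x]:
            depth[y, x] = z      # this fragment is the closest so far
            color[y, x] = rgb    # so its color wins the pixel
```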

Ray tracing takes a fundamentally different approach from rasterization, explains Miller. "Rather than converting each object into its pixel representation, it takes all of the geometry in the scene and stores it in a highly specialized database," he says. This database is designed around performing the following fundamental query: given a ray of light, what points on a surface does it collide with first? By following a ray of light as it bounces around an entire scene, designers can capture subtle lighting cues, such as the bending of light through water or glass, or the multiple reflections and shadows cast by shiny three-dimensional objects such as an engine or a car.
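In code, that fundamental query looks something like the sketch below: given a ray's origin and direction, return the first surface it strikes. The scene here is just a list of spheres scanned one by one; a real ray tracer would store the geometry in the kind of specialized, query-optimized structure Miller describes.

```python
# Minimal sketch of the core ray-tracing query: first surface hit by a ray.
import numpy as np

def hit_sphere(origin, direction, center, radius):
    """Return the distance to the nearest intersection, or None."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c              # direction is assumed to be unit length
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def closest_hit(origin, direction, spheres):
    """The fundamental query: which surface does this ray collide with first?"""
    best = (None, np.inf)
    for center, radius in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and t < best[1]:
            best = ((center, radius), t)
    return best
```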

Essentially, then, ray tracing tries to find the right information in a database as quickly as possible. This isn't a problem for rasterization, says Miller. Usually, the rendering process is straightforward, and data is cached and ready to go when the processor needs to use it. With ray tracing, however, the brightness of any given point on a surface could have been created from multiple bounces of a light ray, and data about each bounce of light tends to be stored in a separate location in the database. "This is a nightmare scenario for the caching strategy built into microprocessors, since each read to memory is in an entirely different location," says Miller.
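Building on the sphere-scene sketch above, a recursive trace makes that access pattern concrete: every bounce starts a fresh closest-hit query from a new point in the scene, so successive lookups touch unrelated parts of the geometry database. The background() shader here is an assumed placeholder.

```python
# Builds on the sketch above (reuses closest_hit and NumPy).
def background(direction):
    # Assumed placeholder shader: plain gray sky.
    return np.array([0.5, 0.5, 0.5])

def trace(origin, direction, spheres, depth=0, max_depth=3):
    obj, t = closest_hit(origin, direction, spheres)
    if obj is None or depth >= max_depth:
        return background(direction)
    center, radius = obj
    hit_point = origin + t * direction
    normal = (hit_point - center) / radius
    reflected = direction - 2.0 * np.dot(direction, normal) * normal
    # Each bounce issues the next query from a brand-new location in the
    # scene: the scattered memory-access pattern that defeats CPU caches.
    return 0.8 * trace(hit_point, reflected, spheres, depth + 1, max_depth)
```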

He explains that his team is exploring various approaches to making these database queries more efficient. Previous research has produced algorithms that bundle certain types of data together to simplify the querying process. For instance, bundles of data can include information that represents rays of light that start from roughly the same location, or rays that head in nearly the same direction. Adobe is not releasing the details of its approach, although Miller says that his team is trying to find the most efficient combination of database-management approaches. Once the researchers develop software that can effectively manage the memory of multicore computers, then ray-tracing algorithms can be rendered at full speed, he says.
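One family of published techniques, often called packet or bundle tracing, groups rays that start near each other or head the same way so their traversals can share memory accesses. Whether Adobe's approach resembles this is not disclosed; the toy sketch below only illustrates the grouping idea by sorting rays into direction octants.

```python
# Toy illustration of ray bundling: group rays by the sign pattern of their
# direction vectors so each group can traverse the scene database together.
from collections import defaultdict

def bundle_rays(rays):
    """rays: iterable of (origin, direction) pairs of NumPy vectors."""
    packets = defaultdict(list)
    for origin, direction in rays:
        octant = tuple((direction > 0).tolist())  # e.g. (True, False, True)
        packets[octant].append((origin, direction))
    return packets
```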

"Adobe makes software that improves a user's ability to create and communicate visually," says Hart of the University of Illinois. "Software like Photoshop provides methods for processing photographs, but by adding ray tracking, users will have the ability to create photorealistic images of things they didn't actually photograph." One of the biggest obstacles at this point, he says, is making the system work fast enough so that a user can run a ray-tracing program interactively.

The current ray-tracing approach alone won't solve all the problems that computer-graphics researchers are tackling, Hart adds. It's still impossible to perfectly simulate the human face. "This is an elusive goal," he says, "because as we get more realistic ... subtle errors become more noticeable and, in fact, more creepy. Once we get faces right, we will need high-quality methods like ray tracing to render them, and we'll want it in real time."

The system is still just a research project, and the company doesn't provide a timeline for when it might reach consumers, but the underlying technology, including multicore architecture, is advancing rapidly on all fronts. Miller suspects that consumers will start to see real-time ray tracing in products within the next five years.