The social graph--an image of a person's connections to friends, family, and colleagues--has been in the news since Facebook founder Mark Zuckerberg suggested earlier this year that this information could be invaluable to businesses looking to spread their products to a large audience. (See "Building onto Facebook's Platform.") Now IBM is exploring how different visualizations of the social graph could be useful within businesses, as a way of helping people work more efficiently and make better connections. Last week the company, which launched its social-software platform, Lotus Connections, earlier this year, released a tool called Atlas that uses the data in Connections to help users analyze their relationships with business contacts.
"As people start using social software and expanding their professional networks, there's actually a lot of value in the relationships that you can determine from statistical analysis of that data," says Chris Lamb, senior product manager for Connections.
Atlas and other Connections tools are based on IBM research into social computing that began in 2002, says product manager Suzanne Minassian. Aimed at helping workers organize around common goals, the research focused on adapting popular social tools such as bookmarking and blogging for business purposes, and integrating them with each other. The larger Connections suite allows workers to create profiles, blog, form communities around common interests, share bookmarks, and plan and track projects as a group. Each component of Connections is integrated with the others, so a user can move seamlessly between tools. IBM has been using features included in Connections for several years internally, and Minassian says that there are more than 400,000 profiles in the system.
Atlas's most powerful features rely on the data available through Connections, Lamb explains. It collects information about professional relationships based not only on job descriptions and information readily available through the corporate directory, but also through blog tags, bookmarks, and group membership. Atlas can be configured to look at e-mail and instant-message patterns, and to weigh different types of information more or less heavily. The result, Lamb says, is a set of tools that go beyond the simple networks that are clear from a corporation's structure.
Atlas's four features are Find, Reach, Net, and My Net. Find and Reach are both focused on finding experts in particular fields. Through Find, a user enters search terms and receives a list of experts, ranked based on information gleaned from social data, the level of the expert's activity in the community, and any connections he may have to trusted associates of the user. Reach then helps the user plot the shortest path to make the connection, suggesting people the user already knows who could put him in touch with an expert. Net and My Net are primarily meant to help people analyze their existing networks. Net shows patterns of relationships within particular topic areas at a company-wide level. For example, it might analyze data on people interested in social computing and produce a map of how those people connect with each other through blog readership and community involvement. My Net allows individuals to analyze their own networks, showing them who they are connected to and how frequently they stay in touch with those people.
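The two ideas behind Find and Reach--scoring experts from weighted social signals, and finding the shortest chain of introductions--can be sketched in a few lines. Everything here is illustrative: the signal names, the weights, and the toy network are invented for this sketch, not IBM's actual model.

```python
from collections import deque

def rank_experts(candidates, weights):
    """Find-style ranking: order candidates by a weighted sum of signals."""
    def score(signals):
        return sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return sorted(candidates, key=lambda c: score(c["signals"]), reverse=True)

def shortest_intro_path(graph, start, target):
    """Reach-style lookup: BFS for the shortest chain of acquaintances."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection found

# Invented signals: topic relevance, community activity, shared contacts.
weights = {"topic_match": 0.5, "community_activity": 0.3, "shared_contacts": 0.2}
candidates = [
    {"name": "Alice", "signals": {"topic_match": 0.9, "community_activity": 0.4,
                                  "shared_contacts": 0.1}},
    {"name": "Bob", "signals": {"topic_match": 0.6, "community_activity": 0.9,
                                "shared_contacts": 0.8}},
]
ranked = rank_experts(candidates, weights)
```

A breadth-first search is the natural fit for Reach because the first path it finds to the expert is also the one requiring the fewest introductions.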
Lamb says that executives might want to use Atlas's Net component to see, for example, how well two companies are integrating after a merger. Alternatively, he says, a salesperson might want to use My Net to make sure that she has good connections across the company to people familiar with the products it sells.
Rob Koplowitz, an analyst with Forrester Research, says that employing social-computing features within a business is as important as using these tools for informal relationships. One key feature of social software designed particularly for businesses is its ability to protect sensitive data, he says: "I'm able to generate relationships and content that might not be appropriate outside of my enterprise. In the consumer space, you assume that the information is public, and that's what you have access to." But with software designed for large corporations, he says companies can assume that access is more secure, and they have the option to make more information available. While Koplowitz thinks that companies will have to be careful about how they choose to configure Atlas and what information they choose to use to build the social graphs, he also says that Connections' integration of social tools is potentially very useful, and something that might eventually become part of more casual networking tools.
Atlas is now being sold through IBM Software Services for Lotus, in part because it requires configuration based on how a business wants to access and analyze information.
Friday, December 28, 2007
IBM Atlas
Posted by Sam at 12:04 PM | 0 comments
Thursday, December 20, 2007
Intel introduced one of the smallest flash-memory-based hard drives on the market. The chip, a building block of so-called solid-state hard drives, competes with similar flash chips from Samsung, which store data in gadgets such as Apple's iPod nano and iPhone. But the Intel chip comes with a standard electronics controller built in, which makes it easy and inexpensive to combine multiple chips into a single, higher-capacity hard drive.
The move highlights Intel's effort to establish itself as a leader in flash-memory chips and to make them a replacement for the bulky and conventional magnetic hard drives that store data on most of the world's computers. Smart phones and so-called ultramobile computers will require some kind of dense, durable storage system in order to bring the power of desktop computers to handheld devices.
Posted by Sam at 12:09 PM | 0 comments
Saturday, December 8, 2007
A new wireless cardiac "patch" could allow doctors to continuously monitor patients' hearts and record electrocardiograms (EKGs) while they are on the go. Such highly portable continuous monitors could help doctors treat cardiac patients, and they may soon become crucial tools in diagnosing conditions in otherwise healthy people, say the device's developers.
Developed by researchers at the Interuniversity Microelectronics Centre (IMEC), an independent nanotechnology research institute in Eindhoven, the Netherlands, the flexible stick-on device is a variation of a Holter monitor, a portable EKG tool currently used by cardiologists to help assess and diagnose their patients. But Holter monitors require a number of electrodes to be stuck to the body and connected, via a tangle of wires, to a bulky recording device worn at the hip.
In contrast, the new device just sticks onto the patient's chest and wirelessly sends electrical signals detected from the heart to a credit-card-like receiver. These signals can be analyzed and used to sound an alarm as an early warning when dangerous heart rhythms, or arrhythmias, are detected, says Bert Gyselinckx, the director of IMEC's Wireless Autonomous Transducer Solutions program. For example, the device could be used to alert emergency services to problems suffered by elderly cardiac patients who live alone.
The new device consists of a flexible circuit board just 60 millimeters long and 20 millimeters wide that contains all the circuitry to detect and transmit the EKG signal up to 10 meters. The flexible board slips into a Lycra patch with three sticky points of contact that act as the EKG electrodes. Short wires within the pouch connect the contact points to the circuit board via snap-on sockets. "This makes it easier to attach the electrodes," says Gyselinckx.
The signal is sent to the receiver using an off-the-shelf wireless transmitter, which uses technology similar to Bluetooth but at much lower power, says Gyselinckx. The receiver is a smart card--a pocket-sized card with an integrated circuit embedded in it--that also incorporates a thin battery. "It looks and feels like a credit card," Gyselinckx says. The card can store the EKG data on an embedded two-gigabyte flash-memory device, or it can be hooked up to a handheld computer or cell phone to relay the data to a clinic.
There is a general trend to make heart-monitoring devices wireless because they are so much easier to use, says Mike Kingsley, director of exercise-physiology laboratories at Swansea University, in Wales.
Already, consumer products are available that monitor the heart and send the signal wirelessly to a watch. But these products only detect heart rate, in terms of beats per minute, says Kingsley. "An EKG gives you a lot more information about the way the electrical current is traveling through the heart," he says. A cardiologist can use this data to determine the morphology and behavior of the heart, both of which are vital to making a diagnosis.
Many hospitals have started installing wireless EKG patient-tracking systems, says Gyselinckx, as a way of keeping tabs on their patients and locating them if they get into trouble. But such systems amount to little more than Holter monitors hooked up to a central hospital tracking system that monitors the patients' whereabouts and EKGs.
The IMEC device does have limitations: in its current form, it can't record as much of the heart's electrical activity as a clinical EKG can. "It doesn't give you an overall picture of the heart--only a snapshot," Kingsley says.
Even so, it is still very useful because it allows all arrhythmic events to be detected, says Hans Stromeyer, chief medical officer of Monebo, in Austin, TX, which has developed a wireless EKG device that is worn like a belt. "And continuous monitoring can pick up events that the patient will not be aware of," he says. This has huge potential in preventative medicine because it can help doctors detect and treat serious heart conditions before they progress and cause irreparable damage.
Indeed, the IMEC team is developing the heart patch as part of a larger project, called Human++, aimed at designing telemedicine technologies for preventative health. Continuously monitoring the vital signs of otherwise healthy people in the general population could make it possible for doctors to preempt a variety of serious illnesses through early detection, Gyselinckx says.
Wireless home-based monitoring and diagnosis is already beginning to happen, says Stromeyer. It has demonstrated its usefulness in long-term recovery and is much cheaper than hospital rehabilitation.
There is also a lot of interest in using portable heart monitors to assist in drug trials. This is because one section of the EKG trace, known as the QT interval, has been shown to be a good indicator of changes in heart activity caused by drug toxicity, says Stromeyer. Highly portable monitors such as the IMEC device could be particularly useful in such an application.
But for now the IMEC team is working to enable the device to record as much data as a clinical EKG can. The team is also working to make the patch more pliable with a combination of flexible organic electronics and thin-film silicon electronics, with the aim of licensing the technology.
Posted by Sam at 1:58 PM | 0 comments
Increasingly, people connect to the Internet through mobile phones, video-game consoles, or televisions. The problem is that a lot of Internet content isn't available for all of these devices, and many websites break when loaded on a mobile device. Tim Berners-Lee, director of the World Wide Web Consortium (W3C) and inventor of the World Wide Web, worries that this is effectively cutting some people off from the information that is freely shared on the Internet. Speaking at the Mobile Internet World conference in Boston earlier this week, Berners-Lee said that the W3C is working on defining a set of standards that developers can use to build websites that work with mobile devices, as well as with desktop computers, so that the mobile Web doesn't break apart from the World Wide Web. This week, the W3C also launched a new tool that developers can use to test their websites for compatibility with mobile devices.
The overarching goal of the initiative, according to Berners-Lee, is to keep content available regardless of the devices available to a person. "I like being able to choose my hardware separately from choosing my software, and separately from choosing my content," Berners-Lee said at the conference. There needs to be just one Web, he explained, and it needs to work on phones.
Many websites are far from Berners-Lee's vision. Some developers, seeing mobile support as an added technical headache, neither make their main sites work on mobile devices nor build mobile versions. For developers who do want their websites to be available everywhere, a common practice is to build special versions of their sites for mobile devices, with pared-down features and, sometimes, pared-down content.
In some parts of the world, the mobile phone is the primary way that people access the Internet, and content should be as available to those people as it is to people using a desktop computer. Nor does the current system work well in wealthier nations. Users of devices such as the iPhone want to access sites at the full capability their hardware offers, says Matt Womer, the W3C's mobile-Web-initiative lead for North America. They don't want to see a pared-down site.
On the other hand, Womer notes that mobile-device users shouldn't be forced to download large images or be redirected to several different pages, since users pay by the kilobyte.
Mobile sites can also be hard to find, because there are no standards for creating domain names. Some sites use the prefix "mobile" instead of "www," for example, while other sites use the prefix "wap." Womer says that the result can be confusing for users, who shouldn't have to know to look for special prefixes. "I think in the end, what's best for the user is one URL that works everywhere," he says.
The W3C's current suggestion for people writing Web pages, Womer says, is to separate information about how to present content from the content itself. The content can be described through hypertext markup language (HTML), the language traditionally used to describe Web pages, while the presentation can be handled with separate style sheets. Womer says that the W3C is collecting information about devices so that developers can tailor the presentation to the capabilities of the hardware.
The W3C's new tool, called the mobileOK checker, will look over code to see how well it follows the W3C's guidelines. Womer says that the tool won't be able to assess everything--some things, such as determining the readability of text against a background color, require human judgment--but it will check a great many variables and provide specific instructions for what needs to be fixed.
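A rough idea of what such automated guideline checks look like can be sketched in Python. These rules--a page-weight budget, alt text on images, no frames--are simplified stand-ins inspired by the W3C's mobile best practices, not the mobileOK checker's real test suite; the 20-kilobyte budget and the page data structure are invented for this sketch.

```python
def check_page(page):
    """Run a few simplified mobile-friendliness checks on a page summary.

    `page` is a dict describing the page: HTML size in kB, a list of
    images (each with src, kb, alt), and whether the page uses frames.
    Returns a list of human-readable problems.
    """
    problems = []
    # Mobile users often pay by the kilobyte, so total weight matters.
    total_kb = page["html_kb"] + sum(img["kb"] for img in page["images"])
    if total_kb > 20:  # assumed budget for this sketch
        problems.append("page weight %d kB exceeds the assumed budget" % total_kb)
    # Images without alt text waste bandwidth and convey nothing if dropped.
    for img in page["images"]:
        if not img.get("alt"):
            problems.append("image %s lacks alt text" % img["src"])
    # Frames render poorly or not at all on many mobile browsers.
    if page.get("uses_frames"):
        problems.append("frames are unsupported on many mobile browsers")
    return problems

problems = check_page({
    "html_kb": 12,
    "images": [{"src": "logo.png", "kb": 15, "alt": "logo"},
               {"src": "hero.jpg", "kb": 4, "alt": ""}],
    "uses_frames": False,
})
```

As the article notes, checks like these can only flag mechanical problems; judgments such as text readability against a background color still need a human reviewer.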
"The importance of standards cannot be overestimated," says Jon von Tetzchner, CEO of Opera Software, who is working with the W3C's mobile-Web initiative. In addition to making browsers for desktop computers and mobile devices, Opera makes browsers for the Nintendo Wii and other game systems. "To deal with the complexity that is out there, there can only be one Web," von Tetzchner says.
Posted by Sam at 1:56 PM | 0 comments
The new Amazon Kindle e-reader, unveiled yesterday, is the latest in a line of ever-improving black-and-white e-paper displays that don't use much power and are bright even in daylight; they more closely reproduce conventional paper and ink than do backlit displays. But bigger technology leaps are imminent. E-paper pioneer E Ink--the company whose technology underpins the Amazon gadget's display--is prototyping versions of the electronic ink that are bright enough to support filters for vivid color displays, and that have a fast-enough refresh rate to render video.
Add it all up, and it represents an emerging trifecta of color, video, and flexibility set to transform a display technology once seen as suited only for rigid black-and-white e-readers like the Kindle and the Sony Reader, and other niche applications like train-station schedule displays that don't need to change quickly. "This latest thing they've done with the video is a key milestone in the history of e-paper technology development," says Gregory Raupp, director of the Flexible Display Center at Arizona State University. "Until this point, you have been limited to static image applications."
E Ink's basic technology uses a layer of microcapsules filled with flecks of submicrometer black and white pigment chips in a clear liquid. The white chips can be positively charged, the black chips negatively charged. Above this layer is a transparent electrode; at the base is another electrode. A positive charge on the bottom electrode pushes the white chips to the surface, making the screen white. A negative charge pushes the black chips up, rendering words and images.
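As a toy model, the electrode logic just described can be captured in a few lines: a positive bottom-electrode charge repels the positively charged white chips toward the viewer, while a negative charge drives the negatively charged black chips up. This is purely an illustration of the switching rule, not E Ink's device physics.

```python
def pixel_color(bottom_electrode_charge):
    """Which pigment faces the viewer for a given bottom-electrode charge."""
    if bottom_electrode_charge > 0:
        return "white"  # like charges repel: white chips pushed to the surface
    return "black"      # negative charge repels the black chips upward

def render(charge_grid):
    """Render a grid of electrode charges as ASCII art ('#' = black)."""
    return ["".join("#" if pixel_color(c) == "black" else "."
                    for c in row)
            for row in charge_grid]
```

Because each pixel holds its state until the electrode charge changes, a display built this way draws essentially no power while showing a static image--the property that gives e-readers their long battery life.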
But the basic technology only produces a black-and-white image. So, E Ink has been refining the ingredients, the electronics, and the mechanics of that process. For example, in recent months the company has developed ultrabright inks that reflect 47 percent of ambient light--a significant improvement over the 35 to 40 percent in existing E Ink black-and-white displays. Higher reflectivity versions should go into commercial products, such as the Sony Reader, in about two years.
This higher brightness makes color displays possible. E Ink uses transparent red, green, or blue filters affixed above the picture elements. In essence, software controls groups of microcapsules sitting below filters of particular hues, and it only turns the microcapsules white when those hues are sought. The E Ink filters are custom-made by a partner, Toppan Printing of Tokyo, to work well with the specific shades, brightness, and reflectivity of the E Ink technology. The first color experimentation began several years ago, but it has been steadily improving in brightness and contrast, says Michael McCreary, E Ink's vice president of research and advanced development. He offered no estimate for a commercialization date.
In another set of advances, tweaks to the E Ink particles and their polymer coatings, and to the chemistry of the solution inside the microcapsules, have helped improve the speed at which the particles can move. McCreary says that for years, conventional wisdom held that E Ink technology could never be made video ready, because particles had to be moved through a liquid. But E Ink has done it, thanks to polymer particle coatings and "special stuff in the clear liquid," McCreary says. In the company's Cambridge, MA, headquarters, two prototypes show the payoff. One is an e-reader display in bright, vivid color. Touch a button, and an image of a bunch of flowers appears; bring the display outside, and it shines brighter because it is reflecting ambient light. (As with black-and-white e-paper, the unit consumes virtually no power until a user changes that image.) The other prototype, a six-inch display hooked up to a computer, showed a video clip from the animated movie Cars. It was a bit grainy, but it was switching frames 30 times per second. Two years ago, products with E Ink technology could manage just one frame per second.
While the video version is still several years from market, "this was a landmark research advance in the history of e-paper," says Russ Wilcox, E Ink's CEO. Invoking the long-held dream for e-paper--that it can be an electronic replacement for real newsprint--he added, "You can imagine a USA Today weather chart where clouds are actually moving."
E Ink is working with several leading display makers to develop flexible transistors that will let E Ink and other color displays bend and even roll up. LG Philips recently announced the world's first 14.1-inch flexible color e-paper display using E Ink technology; the color version uses a substrate that arranges thin-film transistors on metal foil rather than on glass. And last month, Samsung used E Ink technology to set a world record for the resolution of a large flexible color display. (Samsung's 14.3-inch screen has a 1,500-by-2,120-pixel resolution.) No commercialization date has been announced for these technologies.
Other companies are also making advances in e-paper. One of them, San Diego's Qualcomm MEMS Technologies, has developed a MEMS-based version that can produce video-ready refresh rates and will appear in monochrome and bicolor displays in the next year or so. (See "E-Paper Displays Video.") But E Ink is generally acknowledged to have the best technology in terms of simulating the look of paper, says Raupp, whose research lab has partnerships with 16 display makers, including both E Ink and Qualcomm. "Put the two side by side--which one looks like paper? There would be no contest," Raupp says of E Ink and Qualcomm. The move into video and color "expands the application space" and makes E Ink a leading candidate to become a fixture in flexible displays, he adds.
Posted by Sam at 1:49 PM | 0 comments
Thursday, November 8, 2007
Squat orange robots and a set of adaptive algorithms are making it possible to deliver online orders faster. The system, so far installed in two giant Staples warehouses, allows workers to fill two to three times as many orders as they could with conventional methods. The startup that developed the robots and software, Kiva Systems, based in Woburn, MA, announced yesterday that it is rolling out a third system, for the pharmacy giant Walgreens.
Kiva Systems' CEO and founder, Mick Mountz, likens the system to random access memory chips. The warehouse is arranged in a memory-chip-like grid composed of rows and columns of freestanding shelves. The grid gives robots access to any product in the warehouse at any time. The robots serve two basic functions. First, they deliver empty warehouse shelving units to workers who stock them. The workers might stock one unit with a mix of paper, various types of pens, and computer software, all compiled from large pallets that had been delivered to the warehouse. Then, when a consumer submits an order, robots deliver the relevant shelving units to workers who pack the requested items in a box and ship them off. "We turn the whole building into a random access, dynamic storage and retrieval system," Mountz says.
If a consumer orders an item at 2 P.M. on a Thursday, he says, at 2:01, a robot can be delivering that order to a worker to pack. If an order has multiple items, robots will line up for workers as fast as the workers can pack the items. Once the items are packed, robots can pick up the boxes, storing them temporarily or delivering them to the appropriate delivery truck.
Mountz says that the system allows workers to fill orders much faster than conventional systems do because the robots can work in parallel, allowing dozens of workers to fill dozens of orders simultaneously. In one type of conventional system, workers have to walk from shelf to shelf to fill orders, and all that walking takes time. With the Kiva system, several robots can be dispatched to collect all the items in an order at once. The robotic system is also more efficient than conveyor-based systems, in which elaborate conveyors and chutes send boxes past workers who pack them as they go by. In such a system, the slowest part of the line, which could be the slowest worker, limits the overall speed. With the help of the robots, each worker fills an entire order, so one worker doesn't slow everyone else down.
The robotic system is also faster because the entire warehouse can adapt, in real time, to changes in demand. Robots move shelves with popular items closer to the workers, where the shelves can be quickly retrieved. Items that aren't selling are gradually moved farther away. More-conventional warehouses can also be adaptive, Mountz says, but it takes much longer to rearrange items.
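The demand-driven placement idea can be sketched as a simple assignment problem: give the most-ordered shelves the storage slots nearest the packing stations. The data structures here are invented for illustration; Kiva's real scheduler must also juggle robot traffic, battery charging, and in-flight orders.

```python
def reassign_slots(shelves, slots):
    """Match high-demand shelves to slots close to the packing stations.

    shelves: {shelf_id: recent_order_count}
    slots:   {slot_id: distance_to_pack_station}
    Returns  {shelf_id: slot_id}
    """
    # Busiest shelves first...
    by_demand = sorted(shelves, key=shelves.get, reverse=True)
    # ...paired with the closest slots first.
    by_distance = sorted(slots, key=slots.get)
    return dict(zip(by_demand, by_distance))

# Shelf A is hot, shelf B is nearly idle; slot s1 is nearest the workers.
assignment = reassign_slots({"A": 50, "B": 5, "C": 20},
                            {"s1": 1, "s2": 2, "s3": 3})
```

Run continuously on fresh order counts, a rule like this gradually migrates popular stock toward the workers and pushes slow sellers to the back of the grid, which is the adaptive behavior the article describes.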
Posted by Sam at 10:46 AM | 0 comments
Shopping online from the comfort of your desk chair is certainly easier than traveling to a store and lugging home heavy bags. But for all its effortlessness, online shopping falls short when it comes to finding something you weren't looking for but would like to buy. Recommendation systems, such as those built by Amazon, try to uncover these gems, but many fall short of appropriately catering to an individual shopper.
Now a Seattle-based startup called Cleverset thinks it has the secret to the next-generation recommendation system: a type of computer modeling found mainly in artificial-intelligence research labs. Cleverset's system weighs the relationships among individual shoppers, their behavior on the site, the behavior of similar shoppers, and external factors such as seasons, holidays, and events like the Super Bowl. Using these ever-changing relationships, Cleverset's system serves up products that are statistically likely to match what the customer will find interesting.
Online retailers can have millions of products in their warehouses, but a consumer only has a limited view of what's available when she comes to a site, says Bruce D'Ambrosio, Cleverset's founder and a professor of electrical engineering and computer science at Oregon State University. "You've got gigabytes of stuff behind your website," he says, "and you only have a megapixel of display." The challenge for most online stores is finding the best products and information to show in that tiny space on the screen.
Recommendation systems have been around for nearly as long as online retail sites have existed, and each varies slightly in its approach. Many systems just match products to people by looking at the products that others have bought. For instance, if you are looking at a blender, and people who bought the blender also bought a toaster oven, then the system would suggest a toaster oven to you. The problem here, says D'Ambrosio, is that all this analysis of purchases happens offline, and the system has no awareness of what a consumer is trying to accomplish at that specific point in time.
Cleverset uses an approach called statistical relational modeling, developed in the past decade, in which each piece of information in a data set is linked together based on its relationship to every other piece of information. This contrasts with the previous view of looking at data as if in an Excel spreadsheet, where everything carries an equal weight.
Statistical relational modeling has, for the most part, stayed cooped up in research labs. It's been used to develop technologies such as natural-language processing (to extract relationships from text), bioinformatics (to find relationships between genes and proteins), and computer vision (to help robots see scenes as collections of related items). Daphne Koller, a professor of computer science at Stanford University, says that statistical relational modeling is good in these instances because there is a lot of uncertainty within the data sets. Relationships can be established, she says, and then statistics must be used to determine the likelihood and importance of each relationship.
In the case of Cleverset, the system starts collecting data and forming relationships within that data the instant a person hits the retailer's website. D'Ambrosio says that, as with many site-analytics tools, Cleverset relies on little programs that retailers install on their websites. These programs can track the previous site that the consumer viewed and, if it was a search engine, log the keywords used. As the user clicks on items, Cleverset's system builds a more detailed view of his interests and compares it with those of other people using the site. What sets the system apart is that it organizes customers' behaviors into a data set that includes information on how those behaviors relate to each other. The system also pulls in outside information, such as whether or not a person is shopping during a Super Bowl commercial break.
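In spirit, that kind of relationship-weighted scoring with contextual factors might look like the sketch below. The signal names, weights, and the Super Bowl "boost" are invented for illustration; Cleverset's statistical relational models are far richer and are learned from data rather than hand-set.

```python
def recommend(products, shopper_signals, context_boost):
    """Pick the product with the highest relationship-weighted score.

    shopper_signals: {signal_name: weight} inferred from the session
    context_boost:   {category: multiplier} for external factors
                     (e.g. boost snacks during a Super Bowl break)
    """
    def score(p):
        # Base score: weighted evidence from the shopper's behavior.
        base = sum(w * p["affinity"].get(sig, 0.0)
                   for sig, w in shopper_signals.items())
        # Scale by any contextual factor attached to this category.
        return base * context_boost.get(p["category"], 1.0)
    return max(products, key=score)

products = [
    {"name": "stapler", "category": "office",
     "affinity": {"viewed_similar": 0.9}},
    {"name": "chips", "category": "snacks",
     "affinity": {"viewed_similar": 0.5}},
]
# During a Super Bowl commercial break, snacks get a 2x contextual boost.
pick = recommend(products, {"viewed_similar": 1.0}, {"snacks": 2.0})
```

Note how the contextual multiplier lets a weaker behavioral signal (the chips) overtake a stronger one (the stapler)--the flavor of reasoning the article attributes to blending session behavior with external events.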
While Cleverset was founded in 2000, its technology has only recently reached the point at which the results are good enough to make a significant difference in the competitive e-commerce industry. D'Ambrosio says that sites that use Cleverset--which include Overstock.com and Wine Enthusiast--experience, on average, a 20 percent increase in revenue per customer. The company is also earning some media buzz: when Cleverset presented its technology at the Web 2.0 Summit in San Francisco last month, it came away with two audience-voted awards: "Best in Show" and "Most Likely to Exit First."
Stanford's Koller says that recommendation systems such as Cleverset's "fit neatly into the framework of statistical relational modeling because it's all about relationships." She argues, though, that it might be impossible to make a single system fit every kind of e-commerce site. For instance, Netflix, which launched a competition to build a better system, uses different methods than a site that recommends clothes. (See "The $1 Million Netflix Challenge.")
Cleverset works with each site to tailor its technology appropriately, says D'Ambrosio, which will be important, as the company soon plans to launch with a number of undisclosed "very large retailers" that bring in $100 million or more annually. D'Ambrosio adds that the technology is still improving, and he and his team see future versions of their system including even more input from merchandisers about how their customers use their site.
Posted by Sam at 10:43 AM | 0 comments
Saturday, September 8, 2007
Everyscape, a startup based in Waltham, MA, is getting in on the rush to create a virtual version of the real world. Although the site will launch this fall under the shadow of mapping giant Google Earth, Everyscape's cofounders say that users will find the company's look and feel quite different. "We're working on a human experience," CEO Jim Schoonmaker says. "Google has built a superhuman experience."
Everyscape's demo opens in the middle of San Francisco's Union Square, below the Dewey Monument. Users can choose the auto-drive mode, which gives a virtual tour of the area's sights and shops, or they can explore on their own. Auto drive orients a user by showing her the general layout of Union Square before taking her into Harry Denton's Starlight Room and bringing her out again for a dizzy, swirling look at the night sky above the Dewey Monument.
The site is designed to give a full immersive experience. A user should be able to tour Union Square virtually, Schoonmaker says, and then feel comfortable navigating it in real life.
Google Earth, in contrast, opens with a satellite's view of the earth resting in space. From there, users can fly down to explore chosen terrain or look out at the stars. While many areas are created with flat satellite photos, some locations include links to street-level photos taken by users. Images showing a 3-D view of certain buildings can also be layered onto the map, using a special programming language called KML.
In Everyscape, building interiors are constructed the same way as the rest of the environment: by stitching together a series of panoramic photographs taken by company photographers or contributed by users. Within each photograph, a user can swivel through a full sphere of motion. To move users from within one panoramic photograph to the next, Everyscape's servers estimate the locations of the cameras in each photograph and use that information to build sparse 3-D geometry that forms the building blocks for an animated 3-D transition. Everyscape CTO and founder Mok Oh says that the transition works because it simulates people's real-life attitude toward moving from place to place. "Getting there is not what you want," he says. "Being there is what you want."
Derek Hoiem, a researcher at the University of Illinois at Urbana-Champaign who designed the technology behind the 3-D site Fotowoosh, says that 3-D immersive sites are popular now because of their appeal to users. "When you're able to control the environment, it feels more lifelike," he notes. While Hoiem says that Everyscape's technology gives a good approximation of motion, he also says that he would like to see greater freedom of movement, rather than just swiveling and transitioning.
Ironically, the original version of Everyscape's technology, used by the first company that Oh founded, Mok3, had the type of capability that's on Hoiem's wish list. Mok3 built software that can use panoramic photographs to generate environments interactive enough for a game engine, and that looks much like a walk-through captured on video. (See Mok3's Infinite Corridor animation.) In search of a business model, Oh scaled back the technology to make it transfer more easily over the Internet and founded SuperTour Travel, which created interactive environments to show off high-end hotels and other travel destinations to potential customers. With Everyscape, Oh hopes to use what he learned with SuperTour to virtually reproduce the entire world.
In a business model based in part on SuperTour's, Everyscape plans to make money by helping businesses build their interiors for a fee. Schoonmaker says that he expects shopkeepers to understand the need to virtually display their physical inventory and store layout. "That's where all their money went," he says. "That's what they need to show you." In the future, Schoonmaker hopes to add more interactive features to help businesses function virtually. Future additions might give users the ability to buy merchandise inside a store with the click of a mouse, or might add a virtual maître d' that could help visitors make dinner reservations at a restaurant and recommend items on the menu.
Everyscape plans to launch this fall with environments for parts of San Francisco, Boston, and New York. Other future plans include adding user-controlled avatars and features for mobile devices.
Posted by Sam at 9:44 AM 0 comments
Computer-generated effects are becoming increasingly realistic on the big screen, but these animations generally take hours to render. Now, Adobe Systems, the company famous for tools like Photoshop and Acrobat Reader, is developing software that could bring the power of a Hollywood animation studio to the average computer and let users render high-quality graphics in real time. Such software could be useful for displaying ever-more-realistic computer games on PCs and for allowing the average computer user to design complex and lifelike animations.
Adobe is focusing its efforts on ray tracing, a rendering technique that considers the behavior of light as it bounces off objects. Since it takes so long to render, ray tracing is typically used for precomputed effects that are added to films, computer games, and even still pictures before they reach the consumer, explains Gavin Miller, senior principal scientist at Adobe.
With the rise of multicore computing, Miller says, more consumers have machines with the capability to compute ray-tracing algorithms. The challenge now, he says, is to find the best way to divvy up the graphics processes within general microprocessors. "Adobe's research goal is to discover the algorithms that enhance ray-tracing performance and make it accessible to consumers in near real-time form," Miller says.
Consumer computers and video-game consoles compute graphics using an approach called rasterization, explains John Hart, a professor of computer science at the University of Illinois at Urbana-Champaign. Rasterization renders a scene by generating only those pixels that will be visible to a viewer. This process is fast, but it doesn't allow for much realism, explains Hart. "Rasterization is limited in the kinds of visual effects it can produce, and has to be extensively customized just to be able to approximate the appearance of complicated reflective and translucent objects that ray tracing handles nicely." For instance, in real life, if a light is shining at the side of a car, some of that illumination could reflect off metal in the undercarriage, and this would create a reflection on the ground that's visible to a viewer who's looking at the car from above. Rasterization would ignore the pixels that make up the undercarriage, however, and the reflection would be lost.
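Hart's point about rasterization discarding hidden geometry can be illustrated with a toy depth-buffer renderer. This is purely a sketch of the general technique, not any real graphics pipeline: each pixel keeps only the nearest surface, so anything behind it, like the undercarriage in the car example, never contributes to the image.

```python
# Toy depth-buffer rasterizer over axis-aligned rectangles. Each pixel
# keeps only the nearest surface; occluded geometry is discarded, so no
# lighting information can later be derived from it.
def rasterize(rects, width, height):
    """rects: list of (x0, y0, x1, y1, depth, color); lower depth = nearer."""
    depth = [[float("inf")] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]
    for x0, y0, x1, y1, d, color in rects:
        for y in range(max(0, y0), min(height, y1)):
            for x in range(max(0, x0), min(width, x1)):
                if d < depth[y][x]:        # nearer surface wins the pixel
                    depth[y][x] = d
                    frame[y][x] = color
    return frame

# A near red square completely hides a far blue one behind it.
frame = rasterize([(0, 0, 4, 4, 2.0, "blue"), (0, 0, 4, 4, 1.0, "red")], 4, 4)
```

Once the blue square loses the depth test, it is gone: no reflection or shadow it might have cast can be recovered, which is exactly the limitation Hart describes.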
Ray tracing takes a fundamentally different approach from rasterization, explains Miller. "Rather than converting each object into its pixel representation, it takes all of the geometry in the scene and stores it in a highly specialized database," he says. This database is designed around performing the following fundamental query: given a ray of light, what points on a surface does it collide with first? By following a ray of light as it bounces around an entire scene, designers can capture subtle lighting cues, such as the bending of light through water or glass, or the multiple reflections and shadows cast by shiny three-dimensional objects such as an engine or a car.
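The fundamental query Miller describes, "given a ray of light, what surface does it collide with first," can be sketched with spheres standing in for scene geometry. This is an illustrative toy, not Adobe's database-backed implementation.

```python
import math

# Sketch of the core ray-tracing query: given a ray, find the nearest
# surface it hits. Spheres stand in for arbitrary scene geometry.
def first_hit(origin, direction, spheres):
    """Return (t, sphere) for the nearest intersection, or None.

    origin, direction: 3-tuples (direction assumed normalized).
    spheres: list of (center, radius) pairs.
    """
    best = None
    for center, radius in spheres:
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            continue                      # ray misses this sphere
        t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
        if t > 1e-6 and (best is None or t < best[0]):
            best = (t, (center, radius))
    return best

# A ray down the z-axis hits the sphere at z=5 before the one at z=10.
hit = first_hit((0, 0, 0), (0, 0, 1), [((0, 0, 10), 1.0), ((0, 0, 5), 1.0)])
```

A full renderer repeats this query millions of times, once per bounce of every ray, which is why the memory layout of the scene database dominates performance.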
Essentially, then, ray tracing tries to find the right information in a database as quickly as possible. This isn't a problem for rasterization, says Miller. Usually, the rendering process is straightforward, and data is cached and ready to go when the processor needs to use it. With ray tracing, however, the brightness of any given point on a surface could have been created from multiple bounces of a light ray, and data about each bounce of light tends to be stored in a separate location in the database. "This is a nightmare scenario for the caching strategy built into microprocessors, since each read to memory is in an entirely different location," says Miller.
He explains that his team is exploring various approaches to making these database queries more efficient. Previous research has produced algorithms that bundle certain types of data together to simplify the querying process. For instance, bundles of data can include information that represents rays of light that start from roughly the same location, or rays that head in nearly the same direction. Adobe is not releasing the details of its approach, although Miller says that his team is trying to find the most efficient combination of database-management approaches. Once the researchers develop software that can effectively manage the memory of multicore computers, then ray-tracing algorithms can be rendered at full speed, he says.
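The bundling idea from that prior research can be sketched as follows. This is a toy illustration, not Adobe's undisclosed approach: rays heading in nearly the same direction are grouped together, so the scene data they touch can be fetched once per bundle rather than once per ray.

```python
from collections import defaultdict

# Toy ray-bundling sketch: quantize each ray's direction onto a coarse
# grid and group rays that land in the same cell, so one memory fetch
# can serve a whole bundle of similar rays.
def bundle_by_direction(rays, bins_per_axis=4):
    """Group rays by a coarse quantization of their direction vectors.

    rays: list of (origin, direction) with direction components in [-1, 1].
    """
    bundles = defaultdict(list)
    for origin, direction in rays:
        key = tuple(int((d + 1.0) / 2.0 * (bins_per_axis - 1) + 0.5)
                    for d in direction)
        bundles[key].append((origin, direction))
    return bundles

rays = [((0, 0, 0), (0.0, 0.0, 1.0)),
        ((1, 0, 0), (0.05, 0.0, 0.99)),   # nearly the same direction
        ((0, 0, 0), (1.0, 0.0, 0.0))]     # a very different direction
bundles = bundle_by_direction(rays)
```

The first two rays fall into the same bundle while the third gets its own, which is the kind of grouping that turns scattered database reads into cache-friendly batches.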
"Adobe makes software that improves a user's ability to create and communicate visually," says Hart of the University of Illinois. "Software like Photoshop provides methods for processing photographs, but by adding ray tracking, users will have the ability to create photorealistic images of things they didn't actually photograph." One of the biggest obstacles at this point, he says, is making the system work fast enough so that a user can run a ray-tracing program interactively.
The current ray-tracing approach alone won't solve all the problems that computer-graphics researchers are tackling, Hart adds. It's still impossible to perfectly simulate the human face. "This is an elusive goal," he says, "because as we get more realistic ... subtle errors become more noticeable and, in fact, more creepy. Once we get faces right, we will need high-quality methods like ray tracing to render them, and we'll want it in real time."
The system is still just a research project, and the company doesn't provide a timeline for when it might reach consumers, but technology on all fronts, including multicore architecture, is advancing rapidly. Miller suspects that consumers will start to see real-time ray tracing in products within the next five years.
Posted by Sam at 9:35 AM 0 comments
Wednesday, August 29, 2007
Researchers at the University of California, Santa Barbara (UCSB), have designed a silicon-based laser that emits ultrashort pulses of light at high frequencies--two characteristics that are crucial if silicon-based lasers are to become practical. Eventually, the researchers hope that the new laser could replace other, more expensive lasers in optical communication networks. It could even lead to faster computers that shuttle data around using light instead of electricity.
Modern telecommunications networks use three distinct gadgets--lasers, modulators, and detectors--to produce, encode, and detect light. Currently, all three are made of nonsilicon semiconductors, such as indium phosphide, that are difficult to mass-produce; as a consequence, they tend to be expensive and bulky. But if they could instead be made from silicon, they could be integrated on individual chips, says John Bowers, professor of electrical and computer engineering at UCSB. Devices that currently cost hundreds of dollars each could then be made in bulk for pennies, and the cost of bandwidth would plummet. The one snag in the plan is that it's hard to make silicon produce light.
In September 2006, however, the UCSB team and Intel announced a new hybrid laser that, although it still used indium phosphide, was built on a silicon base. (See "Bringing Light to Silicon.") The manufacture of the device began with a wafer that consisted of a layer of silicon dioxide sandwiched between two layers of silicon. In the top layer of silicon, the researchers etched a channel, called a waveguide, within which light bounced back and forth. To the top of the wafer, they bonded strips of indium phosphide, using a layer of glass glue only 25 atoms thick. Adding this additional layer, says Bowers, isn't much different from adding layers of other materials to silicon, something that's regularly done in today's manufacturing process.
To turn the laser on, the researchers applied electrical current to metal contacts on top of the indium phosphide. Indium phosphide is a naturally light-emitting material, so the strips of it on top of the wafer produced photons that got trapped in the channel below, bouncing back and forth along the length of the silicon waveguide. In certain materials, that bouncing is enough to amplify normal light into laser light, but not in silicon. So the device was designed to let a small amount of light, called the evanescent tail, sneak back into the indium phosphide, where it was amplified. The benefit of this design is that it avoids the costly fabrication of an indium-phosphide waveguide.
For the new laser, which is described in a recent issue of Optics Express, the researchers made their design slightly more complex. "We needed to turn it into a device with multiple sections," explains Alexander Fang, a graduate student who worked on the project. He says that he had to make sure the lengths of the cavities were precise, and that regions that amplified light and absorbed light were electrically isolated from each other.
Posted by Sam at 5:58 PM 0 comments
Saturday, August 25, 2007
Neuroengineering
Silicon Brains
Computer chips designed to mimic how the human brain works could shed light on our cognitive capacities.
Kwabena Boahen's lab at Stanford University is spotless. A lone circuit board, housing a very special chip, sits on a bare lab bench. The transistors in a typical computer chip are arranged for maximal processing speed, but this microprocessor features clusters of tiny transistors designed to mimic the electrical properties of neurons.
Raising Consciousness
Some seemingly unconscious patients have startlingly complex brain activity. What does that mean about their potential for recovery? And what can it tell us about the nature of consciousness?
Next-Generation Retinal Implant
Scientists plan to test an implanted chip with four times the resolution of the previous version in people blinded by retinal degeneration.
Finding Hidden Tumors
Doctors at Massachusetts General Hospital are using whole-body MRI to illuminate a tricky disease.
MRI: A Window on the Brain
Advances in brain imaging could lead to improved diagnosis of psychiatric ailments, better drugs, and earlier help for learning disorders.
A Brain Chip to Control Paralyzed Limbs
Research is under way to make a brain chip capable of triggering muscle movement.
Brain Chips Give Paralyzed Patients New Powers
A neural implant allows paralyzed patients to control computers and robotic arms -- and, maybe one day, their own limbs.
Brain Electrodes Help Treat Depression
Studies suggest that deep brain stimulation could effectively treat depression.
Posted by Sam at 1:53 PM 1 comments
Researchers at Microsoft and Mitsubishi are developing a new touch-screen system that lets people type text, click hyperlinks, and navigate maps from both the front and back of a portable device. A semitransparent image of the fingers touching the back of the device is superimposed on the front so that users can see what they're touching.
Multitouch screens, popularized by gadgets such as PDAs and Apple's iPhone, are proving to be more versatile input devices than keypads. But the more people touch their screens, says Patrick Baudisch, a Microsoft researcher involved in the touch-screen project, the more content they cover up. "Touch has certain promise but certain problems," he says. "The smaller the touch screen gets, the bigger your fingers are in proportion ... Multitouch multiplies the promise and multiplies the problems. You can have a whole hand over your PDA screen, and that's a no go."
The current prototype, which illustrates a concept that the researchers call LucidTouch, is "hacked together" from existing products, says Daniel Wigdor, a researcher at Mitsubishi Electric Research Lab and a PhD candidate at the University of Toronto. The team started with a seven-inch, commercial, single-input touch screen. To the back of the screen, they glued a touch pad capable of detecting multiple inputs. "This allowed us to have a screen on the front and a gesture pad [on the back] that could have multiple points," says Wigdor. "But what that didn't give us was the ability to see the hands." So, he says, the researchers added a boom with a Web camera to the back of the gadget.
The image from the Web camera and the touch information from the gesture pad are processed by software running on a desktop computer, to which the prototype is connected. The software subtracts the background from the image of the hands, Wigdor explains, and flips it around so that the superimposed image is in the same position as the user's hands. Additionally, pointers are added to the fingers so that a user can precisely select targets on the touch pad that might be smaller than her finger. In October, a paper describing the research will be presented at the User Interface Software and Technology symposium in Rhode Island.
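The flipping step Wigdor describes can be sketched in a few lines: the rear camera sees the hand mirrored relative to the front of the screen, so the image must be reflected left-to-right before it is superimposed. (The plain 2-D array used here as the image representation is chosen for illustration.)

```python
# Sketch of the mirroring step: an image captured behind the device is
# flipped left-to-right so each superimposed finger appears where the
# user's hand actually is.
def mirror_horizontal(image):
    """image: 2-D list of pixel values (rows of columns)."""
    return [list(reversed(row)) for row in image]

rear_view = [[1, 2, 3],
             [4, 5, 6]]
front_overlay = mirror_horizontal(rear_view)
```

After this flip, a finger touching the left side of the back pad shows up on the left side of the front display, preserving the illusion of a transparent device.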
Admittedly, this prototype has several limitations. Most glaringly, it's impractical to attach a boom and camera to the back of a handheld device. In their paper, the researchers suggest a number of different approaches for more-compact LucidTouch prototypes. The gesture pad on the back could actually provide an image of the user's fingers as well as touch information, explains Wigdor. The pad uses an array of capacitors, devices that store electrical charge. Fingers create a tiny electrical field that changes the capacitance of the array, depending on their distance from it. This distance can be tuned, says Wigdor, so that the pad can register the entire finger, and not just the fingertip touching it. Another approach, he says, would be to use an array of tiny, single-pixel light sensors that could map fingers' locations. Or the device could use an array of flashing, infrared-light-emitting diodes; sensors would then detect the light's reflection off of a hand, Wigdor explains.
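The capacitive idea Wigdor outlines, registering the entire finger and not just the fingertip, might be sketched with two thresholds: a strong signal marks actual contact, while a weaker signal reveals the rest of a hovering finger. The thresholds and readings below are invented for illustration; a real controller would work on raw sensor data.

```python
# Hedged sketch of two-level capacitive sensing: high readings indicate
# contact (fingertips), lower readings indicate the hovering finger
# silhouette used for the superimposed hand image.
def classify_pad(readings, touch_level=0.8, hover_level=0.3):
    """readings: 2-D list of normalized capacitance changes in [0, 1]."""
    touches, silhouette = [], []
    for y, row in enumerate(readings):
        for x, value in enumerate(row):
            if value >= touch_level:
                touches.append((x, y))
            if value >= hover_level:
                silhouette.append((x, y))
    return touches, silhouette

pad = [[0.0, 0.4, 0.9],     # fingertip pressing at the right edge
       [0.0, 0.3, 0.5]]     # the rest of the finger hovering nearby
touches, silhouette = classify_pad(pad)
```

Tuning the hover threshold corresponds to the distance tuning Wigdor mentions: lower it, and more of the finger above the pad becomes visible.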
As touch screens shrink, says Scott Klemmer, a professor of computer science at Stanford University, one of the biggest problems users face is inadvertently covering up content with their fingers. LucidTouch, he says, "distinguishes itself in two ways: first, it provides better feedback about where you are ... and the other distinction is that it's multitouch."
Even with their prototype's cumbersome design, the researchers were able to write applications for it and gather user responses from a small group. Depending on the application, users found that touching the back of the screen could be useful. For instance, most preferred to type on a Qwerty keypad using the front of the screen. But when the keypad was split down the middle, and one half was placed vertically along each side of the screen, most preferred to type on the back of the device. Half of the participants preferred using the back of the device for tasks such as dragging objects and navigating maps. The users were also divided on whether the superimposed images of their fingers were helpful. Two-thirds of the participants preferred the superimposed images when using the keyboard and dragging objects, and half preferred them while using the map.
These results suggest that a user's preference for LucidTouch and pseudo-transparency depends on the application. Baudisch suspects that one of the first places that this technology could appear is in portable gaming, where specific games could be written for the technology. But importantly, it could enable people to start thinking differently about the potential of multitouch screens on handhelds.
"I think--zooming out for a moment--what's really exciting about this time is that for so many years, we've seen the dominance of the mouse," says Stanford's Klemmer. "I think that hegemonic situation is now over. What this points to for me is the idea that we're going to see this increased diversity of devices that adapt to different situations."
Posted by Sam at 1:45 PM 0 comments
Uncrating a 103-inch Panasonic Plasma (Gallery)
Posted by Sam at 12:59 PM 0 comments