Thursday, December 20, 2012

How laser 'printing' builds DNA

The concept of using lasers to synthesize DNA with a specified genetic sequence intrigued me so much that I tried to describe it in my October Photonic Frontiers feature. After receiving a grant from the National Science Foundation, the company behind the idea, Cambrian Genomics (San Francisco, CA), has released new details on the process, and my speculation about its nature turned out to be wrong.

Conventionally, DNA synthesis has been a two-stage assembly process. First, individual base pairs are assembled into "oligonucleotide" sequences of 60 to 100 base pairs. Then a number of those longer chains are stitched together into the synthetic DNA. The process is time-consuming and costs 30 to 50 cents per base pair, a figure that adds up quickly for long sequences. I had thought they might be using lasers to manipulate the base pairs into place.
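The per-base cost is easy to put in perspective with a little arithmetic. This sketch uses only the figures quoted above (30 to 50 cents per base pair, oligos of 60 to 100 bp); the 10 kb gene length and the function names are illustrative assumptions, not anything from Cambrian.

```python
# Back-of-the-envelope cost of conventional two-stage DNA synthesis,
# using the figures in the text: 30-50 cents per base pair.

def synthesis_cost(length_bp, cost_per_bp):
    """Total synthesis cost in dollars for a sequence of length_bp base pairs."""
    return length_bp * cost_per_bp

gene_length = 10_000  # a modest 10-kilobase synthetic gene (assumed example)
low = synthesis_cost(gene_length, 0.30)
high = synthesis_cost(gene_length, 0.50)
print(f"10 kb gene: ${low:,.0f} to ${high:,.0f}")

# Number of 60-100 bp oligos needed to tile the gene (ignoring overlaps):
oligos_needed = {n: -(-gene_length // n) for n in (60, 100)}
print(oligos_needed)
```

Even a single modest gene runs to thousands of dollars and more than a hundred oligos, which is why cheap, sequence-verified oligo recovery matters.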

Instead, Cambrian Genomics uses microarray cloning to mass-produce a million oligonucleotides in parallel. That approach has been tried before, but was hampered by the high error rates of microarray synthesis. To overcome that problem, Cambrian synthesizes large volumes of oligonucleotide fragments on microarrays, then uses massively parallel DNA sequencing to sort the different DNA variants and identify those with the desired sequence. Then, says Cambrian founder and CEO Austen Heinz, "we use laser catapulting, also known as laser-induced forward transfer, to eject clonal DNA populations" that have been identified as having the desired sequences. The process is a variation on laser capture microdissection, which can excise part of a cell and move it to a desired location without damaging DNA. High-speed laser pulses then eject beads carrying the desired sequences in the right order to assemble into genes on a 384-well plate, as shown in the diagram.
Cambrian Genomics process uses lasers to select oligonucleotides with the desired sequence.
The goal, Cambrian wrote in a summary of its application for a phase-one Small Business Innovation Research (SBIR) grant, "is to be able to recover tens of thousands of sequence-verified oligonucleotides in several hours from sequencer flowcells."  NSF announced on December 5, 2012, a $150,000 grant that will run through the first six months of 2013. Cambrian hopes that will open the door to disruptive reductions in the cost of DNA synthesis.

Wednesday, December 12, 2012

Display technology getting ahead of the market

Peter Jackson's decision to shoot The Hobbit at 48 frames per second brought optical technology into many holiday-party conversations, at least among technologists and movie buffs. Together with demonstrations of video screens with horizontal resolution of 8000 pixels, it raises the question of whether the cutting edge of large-screen display technology is getting too far ahead of the market.

From the production side, it makes sense to record a movie in the best quality available at reasonable cost. It's easy to reduce resolution or frame rate to current mass-distribution standards. Theaters can charge extra for the highest quality screenings, as they have done for 3D. And archival copies should be compatible with the next generation or two of technologies.


From the display side, reviewers had mixed reactions. They found some parts spectacular, but sometimes too revealing. As Lucy O'Brien wrote on the gaming site IGN.com, "The problem with doubling the frame-rate in The Hobbit is a problem of scrutiny; you can see all its tricks."

The push for higher video screen resolution comes largely from the consumer electronics industry. Aided by government mandates to convert to digital broadcasting, the industry persuaded the public to switch to flat-panel high-definition televisions showing 720 or 1080 lines, corresponding to widths of 1280 or 1920 pixels respectively. But the public largely passed on 3D television, and in uncertain times they have been slow to step up to larger screens, so manufacturers have slashed prices to bolster sales.

Two ultra-high-definition formats are in development. One that doubles resolution is called 4K, for a nominal width of 4000 pixels (actually 3840 x 2160 pixels). An alternative called 8K quadruples resolution to a nominal width of 8000 pixels (actually 7680 x 4320 pixels). Some 4K equipment is available, and 8K has been demonstrated. However, big challenges remain, writes Pete Putman of Display Daily, including lack of production equipment and cameras, high screen costs, and the need for much more bandwidth to carry the larger files.
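Putman's bandwidth point is easy to quantify. The sketch below computes raw, uncompressed data rates for the pixel counts quoted above, at an assumed 60 frames/s and 24 bits per pixel; real broadcasts use heavy compression, so treat these as upper bounds that show the scaling, not delivery requirements.

```python
# Raw (uncompressed) video data rates for HD, 4K, and 8K.
# Assumes 24 bits per pixel and 60 frames/s (illustrative values).

def raw_rate_gbps(width, height, fps=60, bits_per_pixel=24):
    """Uncompressed data rate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

formats = {"1080p": (1920, 1080), "4K": (3840, 2160), "8K": (7680, 4320)}
for name, (w, h) in formats.items():
    print(f"{name}: {raw_rate_gbps(w, h):.1f} Gbit/s")
```

Each step quadruples the pixel count, so 8K carries sixteen times the raw data of today's 1080p signal.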

Unlike 3DTV, ultra-high-def won't give you a headache or require special glasses. It makes sense for future-proofing video production, and it could be a selling point for video venues or sports bars. But for now, ultra-high-def has gotten far ahead of the home television market, which has come to like today's low prices.

Friday, November 30, 2012

Bright future for silicon


The Wiley-VCH journal ChemPhysChem issued a press release embargoed until early on the morning of November 21, 2012, heralding "a bright future for silicon." Just eight hours later, the journal lifted the embargo, citing "early reporting" of the research by Brian Korgel of the University of Texas (Austin, TX) and colleagues.

Embargo breaks often indicate hot stories, and the headline hinted at an important step toward the elusive goal of efficient light emission from silicon. Yet the next line was more muted: "Ordered nanocrystal arrays may provide a new platform to study and tailor the light-emitting properties of silicon." What is the real story?

Silicon is a wonderful material for electronics, but its photonic uses have been hobbled by an indirect bandgap that makes it very hard for electrons dropping into the valence band to release their energy as photons. That leaves silicon far behind III-V compounds like gallium arsenide for LEDs and diode lasers. Yet silicon is far ahead of other semiconductors in electronics, and companies like Intel (Santa Clara, CA) want to integrate photonics into their integrated circuits.

So far they have demonstrated "silicon lasers" by optically pumping Raman lines in silicon and III-V diode laser chips bonded to silicon. Both were important advances. But neither met the real goal--electrically powered emitters based on silicon that could be integrated into standard semiconductor chip production processes.

In their ChemPhysChem paper, Korgel and colleagues take a different approach, tapping the bright luminescence produced by silicon quantum dots. They write that their major achievement is devising a chemical technique that causes self-assembly of "the first colloidal Si nanocrystal superlattices." Self-assembly is essential because individual dots are too small to fabricate by conventional photolithography, and transmission electron microscope images show the dots are closely spaced in regular face-centered-cubic arrangements (see photo).

TEM images of silicon nanocrystals in the 111-oriented (c) and 112-oriented (d) planes, with depictions of the crystalline structures shown in insets. (Courtesy Yixuan Yu et al., ChemPhysChem, Wiley-VCH Verlag GmbH & Co. KGaA, http://dx.doi.org/10.1002/cphc.201200738 [2012]. Reproduced with permission)

The authors say that covalent bonds with the hydrocarbon solvent make the silicon-nanocrystal superlattices stable to 350 degrees Celsius, higher than other similar superlattices. That's encouraging news, because self-organized nanocrystals are a promising fresh approach to structuring silicon to emit light more efficiently. But so far electrical excitation--sought for integrated optoelectronics--has far to go to match the efficiency of optical excitation of isolated silicon quantum dots. So Korgel is understandably optimistic about having "a new playground for understanding and manipulating the properties of silicon in new and unique ways," and is appropriately cautious in not claiming silicon lasers are just around the corner.

Monday, November 19, 2012

Making solid-state lighting fun


Solid-state lighting is a clean, green new market for optical technology, but it's hard to get very excited about white LEDs that merely replace older incandescent and fluorescent bulbs. Now, Philips is trying to make solid-state lighting fun with wirelessly controlled color-tunable bulbs called "Hue".

A Hue bulb screws into a standard light socket and contains red, green, and blue LEDs. A smartphone or iPad app controls the bulb's output through a wireless controller and a wireless receiver in the bulb. The app matches the LED outputs to colors selected from a rainbow palette in the app, or from the user's favorite photos. Users can pick bright disco colors, shades of white from candlelight to sunlight, or anything in between.
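Philips hasn't published how the app maps a picked color to LED drive levels, so the following is a toy illustration only: it assumes simple per-channel scaling of an sRGB hex value, where a real product would use calibrated chromaticity mixing for its particular emitters.

```python
# Toy sketch of mapping a color picked from a photo or palette to
# relative drive levels for red, green, and blue LEDs.
# Assumption: naive per-channel scaling, NOT Philips' actual algorithm.

def hex_to_led_levels(hex_color):
    """Return (r, g, b) duty cycles in 0.0-1.0 from a '#RRGGBB' string."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))

print(hex_to_led_levels("#FF8000"))  # a warm orange
```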

A $200 starter set including the controller and three bulbs sounds like an impulse buy at the Apple Store -- and that's exactly where Philips is selling it, as a fun gadget. A single 600-lumen Hue bulb will set you back $60, more than triple the price of a Philips Ambient bulb that emits a pleasant white light. But playing with colored lights is much more fun, as Philips shows in a video.

The Hue isn't just a party light. You can set it to emit shades of white from a bright "energize" tone to start the morning to a warm "relax" shade to unwind in the evening. You can set each bulb to turn on and off when you want it. So it's an all-purpose adjustable light ready to put into any socket in the house, without costly rewiring.

Philips is first to market, but competition is coming. LiFx (San Francisco, CA) in September sought support on Kickstarter to develop their own smart bulb, and was surprised to receive $1.3 million in pledges when they had sought only $100,000. They have demonstrated a bench version and now are designing a production prototype, which will include a white LED as well as the RGB emitters.

So far press attention has focused on controls and tunable colors, but I wonder what the green sources are. Philips is using a "lime green" LED from its LumiLEDs division because it gives better color rendering than standard green LEDs, but won't disclose the wavelength or composition. Is it a hard-to-make green LED, a phosphor-LED hybrid, or something else?  If anybody out there has a spectrophotometer and a Hue at hand, it would be interesting to see a spectrum.


iPhone sets a Philips Hue bulb to "relax" for a calming evening. (Courtesy of Philips Lighting)

Tuesday, October 30, 2012

IEEE recognizes fiber laser milestone


Fiber lasers and amplifiers can do incredible things, but the technology is not as new as you might think. Half a century ago, Elias Snitzer and a handful of colleagues at the American Optical Company's Research Center in Southbridge, MA pioneered both technologies. On October 26, 2012, I attended the dedication of a plaque recognizing the achievement as a Milestone in electrical technology by the Institute of Electrical and Electronics Engineers.

Founded in the 19th century to make spectacles, American Optical in 1954 became the first company to try to develop practical fiber-optic imaging bundles, which had first been demonstrated by academics and an independent inventor working on shoestring budgets. Initially funded by the Central Intelligence Agency to develop image-scrambling bundles for secure messaging, AO later turned to imaging bundles.

AO hired Snitzer to work on fiber optics in 1959. At his job interview, he recognized the puzzling patterns in a fiber bundle as evidence of lateral modes, and later published the first analysis of single-mode transmission. Interested in the laser, Snitzer took advantage of AO's glass expertise to make a solid-state laser of glass rather than crystals. He formed barium crown glass doped with 2% neodymium oxide into a three-inch rod thinner than a millimeter, covered with a low-index glass cladding to improve light transmission.  Pumping with a coiled flashlamp like the one Theodore Maiman used in the ruby laser, Snitzer demonstrated pulsed lasing in the stiff neodymium-doped fiber at room temperature in 1961.

In 1963, Snitzer and Charles Koester amplified pulses by up to a factor of 50,000 in a meter-long fiber laser without reflective end coatings, coiled around a linear flashlamp. Their goal was to measure gain dynamics, but the demonstration also showed the potential of fiber amplifiers.

The lack of good pump diodes kept fiber lasers and amplifiers from being practical until the 1980s. Snitzer played an important role in that development, developing doped fiber sensors, demonstrating 1480 nm diode pumping for erbium-fiber amplifiers and developing dual-core fibers now used in high-power fiber lasers. Snitzer died in May, but lived to see developments including multi-kilowatt fiber lasers and high-speed communications through fiber amplifiers in the global telecommunications network. Four of his children who attended the Milestone dedication were pleased by the recognition of the man they knew as "dad."


IEEE Milestone for fiber lasers and amplifiers, across the street from former American Optical headquarters in Southbridge, MA. (Courtesy of Dick Whitney)

Friday, October 12, 2012

Nobel Prize for quantum optics

The award of the 2012 Nobel Prize in Physics to Serge Haroche and David Wineland is the latest in a series of Nobel Prizes honoring elegant experiments using light to illuminate fundamental physics. The Swedish Academy of Sciences cited the two "for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems." By examining individual photons and atoms, they resolved big questions about quantum mechanics.

Physicists long wondered how seriously they should take the paradoxes that arise from applying quantum mechanics rigorously to the behavior of individual particles. Albert Einstein famously called the concept of entangled particles "spooky action at a distance," but recent experiments have shown that such entanglement is real, and can be used for quantum encryption. Other recent experiments have observed quantum behavior of individual particles, and manipulated that behavior so that quantum states can be superposed for purposes such as quantum computing.

Haroche and Wineland developed complementary techniques for quantum manipulation of single particles. Haroche pioneered cavity quantum electrodynamics, which studies how an electromagnetically resonant cavity can affect quantum properties of an atom contained inside it, including spontaneous and stimulated emission. Working with microwave and optical cavities, his group measured photon properties without destroying the quantum states. Wineland and his colleagues used light to trap ions in ways that allowed them to transfer and superpose states of an ion. They were able to create single-quantum "Schrödinger's cat" states in the laboratory and watch them change from a quantum superposition to a classical mixture. Their work has opened the door to quantum computing and new types of optical clocks. 

Haroche holds the chair in Quantum Physics at the Collège de France (Paris, France), and is well known for his research in quantum optics and quantum computing, and for his major contributions to cavity quantum electrodynamics, the behavior of atoms and light in high-Q cavities. His work has earned him a long list of awards, including the Townes Award in 2007 from the Optical Society of America and the Herbert Walther Award from the German Physical Society and OSA in 2010. His deep roots in the optics community include doing his doctoral dissertation under Claude Cohen-Tannoudji and postdoctoral research under Arthur Schawlow, both future Nobel laureates.

Wineland wrote his doctoral dissertation at Harvard University under Norman Ramsey, another Nobel laureate, and heads the ion-storage group at the National Institute of Standards and Technology (Boulder, CO). He demonstrated the first laser cooling in 1978, and has used that technique to study quantum mechanics and develop applications. He demonstrated the first single-atom quantum logic gate in 1995, showing the potential of quantum computing, and later demonstrated entanglement of two and four ions. Other achievements include demonstrating quantum teleportation and a quantum-logic atomic clock, now the world's most precise atomic clock. His long list of awards includes the Schawlow award in laser science from the American Physical Society, OSA's Frederick Ives award, and the first Herbert Walther award in 2008.

David Wineland has won the 2012 Nobel Prize in Physics, along with Serge Haroche. (Image courtesy of Geoffrey Wheeler/NIST)

Monday, September 24, 2012

NIF falls short of ignition


The National Ignition Facility (NIF) will not meet its goal of igniting a fusion plasma before the end of September, the Lawrence Livermore National Laboratory (Livermore, CA) said on Friday. A spokeswoman says Livermore "will continue working toward achieving ignition." The laser is delivering the desired energy, but the target shots are not yielding the expected fusion energy.

NIF was declared complete on March 31, 2009, after it had delivered 1.1 MJ pulses at 355 nm. The 192-beam system was designed to deliver 1.8 MJ pulses, which simulations indicated would be sufficient to ignite a pellet of deuterium-tritium fusion fuel, producing fusion reactions that yielded more energy than the input pulse. The Department of Energy set a target of reaching ignition by September 30, 2012--the end of the fiscal year.

Wary of optical damage, Livermore ramped pulse power and energy slowly. The first 1.8 MJ pulse was not fired until March of this year. On July 5, NIF delivered peak power of 500 TW to a target for the first time in a 1.85 MJ pulse. From outside, it looked like NIF should be closing in on ignition.
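The July 5 numbers set the time scale of a NIF shot. Dividing pulse energy by peak power gives an equivalent square-pulse duration; actual NIF pulses are temporally shaped rather than square, so this only fixes the order of magnitude.

```python
# Back-of-the-envelope: what pulse duration do 1.85 MJ and 500 TW imply?
# A square pulse is an idealization; real NIF pulses are shaped in time.

energy_j = 1.85e6      # 1.85 MJ delivered to the target
peak_power_w = 500e12  # 500 TW peak power

duration_ns = energy_j / peak_power_w * 1e9
print(f"Equivalent square-pulse duration: {duration_ns:.1f} ns")
```

The answer, a few nanoseconds, is why megajoule energies translate into petawatt-scale powers on target.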

But now NIF has become the latest in a long list of fusion lasers that yielded experimental results well short of predictions. A news story in the September 21 issue of Science magazine reports that although computer models predict NIF shots should achieve ignition, the yield of fusion energy from NIF experiments has so far reached only one-tenth of the ignition level.

The National Nuclear Security Administration (NNSA) has already begun studying its options. The first draft of a report is due October 1, with a final report due to Congress on November 30.

Meanwhile, NIF continues firing shots that can produce temperatures and pressures far beyond anything previously possible on the surface of the Earth. Livermore fusion researchers will keep pressing for ignition, and NNSA weapon scientists will get additional shots for their simulations of nuclear explosions as part of the agency's Stockpile Stewardship program.

NIF's laser bay, showing 96 of the 192 beamlines.

Tuesday, September 18, 2012

Light guides light up 3M solid-state bulbs


3M has added a new twist to solid-state lighting--embedding light guides in the outer shell of the bulb to redistribute light emission evenly across its surface like the venerable frosted-glass incandescent bulb.

Solid-state lighting has been widely touted for its outstanding energy efficiency. LED bulbs now in hardware stores draw 13 W of electric power, emit as much visible light as 60 W incandescents, and have lifetimes of 25,000 hours, far beyond 1000-hour incandescents. But high prices and some subtle but significant problems are slowing their adoption.

The 3M bulb is aimed at one of those subtle problems. LEDs emit directionally from a small area. Hot filaments and fluorescent tubes are omnidirectional, and although filaments are small, frosted incandescent bulbs scatter the light so it seems to radiate from the entire surface. Directionality is good news for applications that want light concentrated in one direction, such as street lighting outdoors and downlighting in homes and offices. But it can be a problem in light fixtures in the line of sight, especially when the light comes from a small area. An example is a no-name solid-state lamp I bought earlier this year from a big-box hardware store. Light comes from the small zone where blue LEDs and yellow phosphor are mounted, not from the bulb's frosted surface, producing an unpleasant glare.

Deep inside, the 3M bulb contains similar blue LEDs with yellow phosphors to generate directional white light. But instead of shining directly into the room, the light is coupled into light guides embedded in the bulb. Total internal reflection guides the light around the bulb to areas where the light is scattered out the surface and into the room, as shown in the figure. That reduces brightness to an acceptable level, making the bulb much more presentable in a light fixture.
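Total internal reflection works only above the critical angle set by Snell's law. 3M hasn't disclosed the guide material, so the value below assumes a typical acrylic (PMMA, refractive index about 1.49) surrounded by air, purely as an illustration of the principle.

```python
# Critical angle for total internal reflection in a light guide.
# Assumption: PMMA acrylic (n ~ 1.49) in air -- 3M's actual material
# is undisclosed.
import math

def critical_angle_deg(n_guide, n_outside=1.0):
    """Angle of incidence (degrees) above which light is totally internally reflected."""
    return math.degrees(math.asin(n_outside / n_guide))

print(f"Critical angle for PMMA in air: {critical_angle_deg(1.49):.1f} deg")
```

Light striking the guide wall at more than about 42 degrees from the normal stays trapped, which is how the guide can carry light around the bulb until it reaches the deliberately scattering regions.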


The light guide in the 3M LED bulb carries light from the LED source to diffusing areas on the bulb surface. (Courtesy of 3M)

The bulb, shown in the photo below, can't be mistaken for an incandescent. It needs slits to dissipate heat, a cooling problem that it shares with other LED bulbs, and requires heat sinks that add to its environmental impact. But the design is an innovative step in the right direction, making LED lamps an attractive piece of decor rather than an efficient eyesore.


3M's Advanced LED light distributes light like an incandescent bulb. (Courtesy of 3M)

Friday, September 7, 2012

DARPA PULSE program

Ultrafast laser research has produced some elegant science, from slicing time into incredibly thin slivers to generating combs of frequencies uniformly spaced across a wide band of the spectrum. These capabilities, in turn, have led to a similarly wide range of applications, including transferring time and frequency standards, measuring short intervals of time, and producing pulses so short that they generate extremely high peak powers with only modest amounts of energy.
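A frequency comb's defining property is that its lines sit at f_n = f_ceo + n × f_rep, uniformly spaced by the laser repetition rate. The repetition rate and carrier-envelope offset below are illustrative values for a sketch, not figures from any particular system.

```python
# Sketch of frequency-comb line positions: f_n = f_ceo + n * f_rep.
# Illustrative values, not those of a real instrument.

f_rep = 100e6   # assumed 100 MHz repetition rate
f_ceo = 20e6    # assumed 20 MHz carrier-envelope offset frequency

def comb_line(n):
    """Optical frequency (Hz) of the n-th comb line."""
    return f_ceo + n * f_rep

# Three neighboring lines near 300 THz (roughly 1 um light):
n0 = round((300e12 - f_ceo) / f_rep)
for n in (n0 - 1, n0, n0 + 1):
    print(f"n={n}: {comb_line(n) / 1e12:.6f} THz")
```

Because every optical line is tied to two radio frequencies (f_rep and f_ceo) that electronics can count, the comb acts as a gear train between optical and microwave standards.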

However, ultrafast lasers traditionally have been bulky and complex things, custom-assembled on optical tables and delicately aligned in a laboratory. That complexity makes it hard to realize many potential practical applications, such as putting frequency combs in space to boost the precision of GPS systems or to measure stellar spectra with extreme precision. Now the Defense Advanced Research Projects Agency (Arlington, VA) is trying to do something about the problem by creating the Program in Ultrafast Laser Science and Engineering.

DARPA is not the first to think of making smaller and more durable ultrafast lasers. I mentioned the need for "robust frequency combs" for telecommunications systems or space-based instruments in the January Photonic Frontiers. A web search found four pages that include the phrase "rugged femtosecond laser," but all of them cite an Army contract awarded last year to Arbor Photonics. Such references are few and far between, and Google could not find a single page using the phrase "rugged frequency comb" (or combs) when I was writing this blog.

Shrinking the size and improving the robustness of ultrafast lasers is a big challenge, but success could pay off in important ways. DARPA cites some potential military applications that require rugged sources. One is using the time stability of the microwave-band repetition rate of a femtosecond laser to greatly reduce the close-to-carrier phase noise in a microwave oscillator. Others include transferring time or frequency measurements across the spectrum, and generating high-flux isolated attosecond pulses. Civilian science and technology also would benefit from compact sources of ultrashort pulses.

As is normal with DARPA, success is not guaranteed, but the payoff could be high. In fact, somebody at DARPA surely should have already earned credit in the Pentagon bureaucracy for exceptional skill in acronym creation. Program in Ultrafast Laser Science and Engineering neatly translates into an entirely appropriate acronym -- PULSE.

Wednesday, August 22, 2012

Naming nanolasers

My Photonic Frontiers article coming up in the September issue of Laser Focus World describes recent progress on nanoscale lasers, which have volumes smaller than a cubic wavelength. Such emerging technologies are fascinating, but they also raise a peculiar problem for those of us who write about them: what do we call the things?

Some groups call their nanoscale lasers "spasers," an acronym for surface plasmon amplification by stimulated emission of radiation. Surface plasmons are involved in the process, and the catchy term has gained its own Wikipedia entry, some 266,000 hits in a web search, and a fair amount of press coverage even before a paper in the July 27 issue of Science. Score a few points for savvy PR.

But other researchers prefer more general terms like "nanolasers." One reason is that the acronym for spaser defines a specific process--surface plasmon amplification by stimulated emission of radiation. Yet it's not clear that all nanoscale lasers demonstrated so far rely on that process, and some researchers wonder how stimulated emission in a tiny piece of semiconductor can amplify a surface plasmon, which is a group of oscillating electrons on a conductive surface.

A second reason is more philosophical, that "laser" has become a generic term. As Shaya Fainman of the University of California at San Diego (La Jolla, CA) told me, "any time I see light amplification by stimulated emission, I call it a laser." By that logic, if a nanoscale device is amplifying light by stimulated emission, it's a laser.

There are points to be made for both sides, but there also is another dimension to the discussion--defining a new term can be part of claiming credit for a discovery. The International Commission on Zoological Nomenclature has elaborate rules on the proper naming of living or extinct animal species. No such rules exist in physics, so terms compete on their own merits. Interestingly, Gordon Gould's term "laser" won the popularity contest over Charles Townes' original suggestion of "optical maser," but the Nobel Prize went to Townes.

Who eventually will be credited with inventing nanoscale lasers remains to be determined. For now, I'm using "nanolaser" as a generic term for nanoscale laser, as I did in an earlier article. But I'm also watching for future developments in the fast-moving field.

Monday, August 6, 2012

3D falls flat for Olympics

The past few years have seen some impressive innovations in three-dimensional displays. New digital projectors have made 3D movies come alive in theaters, and high-resolution flat-panel displays can bring 3D television to homes. Digital image processing and 3D helped make Avatar the highest-grossing movie of all time. At the peak of 3D enthusiasm, some in Hollywood predicted that soon 3D production would become standard for movies.

Live sports was supposed to be the next great frontier for 3D, and Panasonic and Olympic Broadcasting Services sent crews to London to record 200 hours of the Summer Olympics in 3D. But the effort seems to have fallen flat. Chris Chinnock reports on Display Daily that while the BBC logged an average UK audience of 24 million people for the opening ceremonies, only 111,000 households watched the 3D simulcast, a figure he called "pretty dismal." My own informal poll of a small newsgroup discussing the Olympics found no one who cared about 3D, and one member had never bothered to set up the 3D on his PlayStation 3.

Why did 3D fall flat for the world's biggest sports spectacular? It's tempting to blame the lack of promotion, the difficulty of finding 3D coverage, and the decision to delay all 3D broadcasts by 24 hours. But the truth is that few people outside the consumer electronics industry show much interest in 3D television. Properly done, 3D can be fun--for a limited time. I enjoyed playing with a 3D set in the video store, but the amusement wore off in 15 minutes. I can see where the 3D versions of some movies might be worth a few extra dollars in the theater. But the monsters-in-the-lap gimmick gets old fast, viewers dislike the active shutter glasses for 3D televisions, and too much intense 3D can cause eyestrain and nausea.

A refreshable holographic image of an F-4 Phantom jet is created on a photorefractive polymer. (Courtesy of the University of Arizona)


New technology from NLT Technologies (Kawasaki, Japan) presents different views to the two eyes of several viewers, allowing them to see depth through the parallax effect without special glasses. However, that's no panacea, because the brain senses depth in multiple ways, and conflicts between different cues lead to eyestrain, headache, and nausea. Perhaps we'll have to wait for further development of holographic video.

Tuesday, July 24, 2012

Seeing Pluto


In July, the Hubble Space Telescope spotted the fifth moon of Pluto, an icy ball 10 to 25 km across that was just a pinprick of light in the image. Much of the press coverage focused on whether that discovery should make Pluto a full-scale planet. But I was far more interested in Pluto, its moons, and the amazing optical feat of finding something so small and so far away.

My interest in optics grew from a fascination with astronomy. I'm old enough to remember the 1978 discovery of Pluto's largest moon, Charon. The discovery images show a small bump on the fuzzy ball of Pluto, recorded on a photographic plate by a ground-based telescope. Comparison of a series of images showed that the bump moved as the unresolved moon orbited Pluto. In the days before adaptive optics, seeing even that much seemed amazing.

Hubble resolved Pluto and Charon soon after its launch in 1990. It was a badly needed success for Hubble in its troubled early years, but scattered light in the background of the photo betrays the spherical aberration of the telescope's primary mirror. Pluto and Charon are blurry and diffuse, but the photo clearly shows them as separate worlds, with Pluto the larger and Charon roughly half its size. Once NASA added corrective optics to fix the spherical aberration, the Faint Object Camera produced much sharper photos in 1994.


Further upgrades have made Hubble even better. In 2005, it spotted two moons roughly 100 km across, later named Nix and Hydra. Last year, astronomer Mark Showalter of the SETI Institute (Mountain View, CA) began a series of Hubble observations to check for other small moons that might scatter dust into the path of the New Horizons spacecraft when it visits Pluto in July 2015. Earlier this month, Showalter downloaded a new batch of Hubble data, and within an hour was on the phone reporting the discovery. A few days later, he told me "I'm still struck by just what an amazing instrument Hubble is. This little object, [called] P5, is fainter than Pluto by a factor of 100,000 and separated by one arc second."
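Showalter's two numbers are easy to check. A brightness ratio converts to astronomical magnitudes as 2.5 log10 of the ratio, and Hubble's 2.4 m mirror sets a diffraction limit well under the one-arcsecond separation; the 550 nm wavelength below is an assumed mid-visible value.

```python
# Two quick numbers behind the P5 detection: the brightness gap in
# magnitudes, and Hubble's diffraction limit versus the 1-arcsecond
# separation. Wavelength of 550 nm is an assumed mid-visible value.
import math

# A factor-of-100,000 brightness ratio expressed in magnitudes:
delta_mag = 2.5 * math.log10(100_000)
print(f"P5 is {delta_mag:.1f} magnitudes fainter than Pluto")

# Diffraction limit of Hubble's 2.4 m mirror at 550 nm, in arcseconds:
theta_rad = 1.22 * 550e-9 / 2.4
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"Diffraction limit: {theta_arcsec:.3f} arcsec")
```

A 12.5-magnitude gap at a separation nearly twenty times the diffraction limit is resolvable in principle, but pulling so faint a point out of Pluto's glare is still a remarkable feat.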

It's amazing and wonderful. And so far Hubble's images show New Horizons is on a good path to avoid any dangerous dust, so we can see close-ups of Pluto three years from now.