Tuesday, October 16, 2012

Silicon photonics: Q&A with Kotura's CTO

A Q&A with Mehdi Asghari, CTO of silicon photonics start-up Kotura. In part one, Asghari talks about a recent IEEE conference he co-chaired whose scope included silicon photonics, the next Ethernet speed standard, and the merits of silicon photonics for system design.

Part 1

 

"Photons and electrons are like cats and dogs. Electrons are dogs: they behave, they stick by you, they are loyal, they do exactly as you tell them, whereas cats are their own animals and they do what they like. And that is what photons are like."

Mehdi Asghari, CTO of Kotura

 

Q: You recently co-chaired the IEEE International Conference on Group IV Photonics that included silicon photonics. What developments and trends would you highlight? 

A: This year I wanted to show that silicon photonics was ready to make a leap from an active area of scientific research to a platform for engineering innovation and product development.

To this end, I needed to show that the ecosystem was ready and present. Therefore, a key objective was to get the industry more involved with the conference. "This has always been a challenge," I was told.

To address this issue I proposed to my co-chair, MIT's Professor Jurgen Michel, that we appoint joint session chairs, one from industry and one from academia. We recruited people we knew from Google, Oracle and Intel as co-chairs, paired them with prominent academics, and asked them to ensure that there was an equal number of industry-invited talks in the schedule. We knew this would be a major attraction for industry attendees. We also got the industry to fund the conference at a level that set an IEEE record.

A key highlight of the show was a boat cruise on San Diego Bay, with Dr. Andrew Rickman as speaker sharing his experiences and thoughts about setting up the first silicon photonics company - Bookham Technology - over 20 years ago.

Among the other distinguished industry speakers, we had Samsung telling us of the role of silicon photonics in consumer applications, Broadcom on the need for on-chip optical communication, Cisco on the role of silicon photonics in the future of the Internet, and Google on its broadband fibre-to-the-home (FTTH) initiative and what silicon photonics could offer in this area.

Oracle also shared its latest developments in silicon photonics and the application of the technology in its systems, while Luxtera discussed the latest developments in its CMOS photonics platform, particularly its 4x25 Gigabit-per-second (Gbps) design.

We also heard about the latest germanium laser development at MIT, and an invited speaker talked about what III-V devices can do, providing a comparison with silicon to make sure we are not blinded by our own rhetoric.

We ended up with a record number of attendees for the conference and, perhaps more importantly, close to half of them came from industry - also a record. This vindicated my motivation and perspective for the conference: silicon photonics is ready and coming.

 

Was there a trend or presentation at the IEEE event that stood out?

There are two areas creating excitement. One is the germanium laser. This is a topic of significant interest because these devices can operate at very high temperatures and therefore they can be next to the processor or ASIC. This can be a game-changer in how we envisage photonics and electronics being integrated.

We have germanium detectors, and at Kotura we are working very hard to develop a germanium electro-absorption modulator. We have shown this device can be extremely small and low power. And it can operate at very high speed - we have observed 3dB bandwidths in excess of 70GHz, which means you can think of 100 Gigabit direct modulation for a device only 40 microns long and with a capacitance of a few femtofarads. So in terms of RF power, the dissipation of this device is virtually zero.
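
To put rough numbers on that last claim (the capacitance and drive voltage below are illustrative assumptions consistent with the figures quoted, not Kotura's published values), the dynamic switching energy of a lumped capacitive modulator driven with an NRZ signal is:

\[
E_{\mathrm{bit}} \approx \tfrac{1}{4} C V_{pp}^{2} = \tfrac{1}{4}\,(5\,\mathrm{fF})(2\,\mathrm{V})^{2} = 5\,\mathrm{fJ/bit},
\qquad
P_{\mathrm{RF}} = E_{\mathrm{bit}} \times 100\,\mathrm{Gbps} \approx 0.5\,\mathrm{mW}.
\]

Half a milliwatt of drive dissipation is negligible next to the watts consumed by the surrounding electronics, which is the sense in which the RF power is "virtually zero".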

I would say the MIT group is probably leading the [germanium laser] efforts. They reported on room-temperature, current-driven laser emission, which is very exciting. The efficiency of these lasers is still too low for commercial applications; they probably have to improve by a factor of 100 or so. But given the progress we've seen in the last two years, if they keep going at that pace we may have viable germanium lasers in a couple of years. Then someone in industry has to take that on and turn it into a product, and that is usually the hardest part.

This is exciting because it enables us to forget about off-chip lasers and integrate them into the device. We can then avoid a whole set of problems. For example, the high-temperature operation of III-V devices is a real limit for us. Electronic devices can give off 100W and operate at 120°C, whereas optical devices often have to be stabilised, may go through multiple packaging layers, and the heat dissipation is usually directly related to cost.

If you could end up with a germanium laser that is happy at high temperatures - and we know our detectors and modulators work at high temperatures, and we know we can use electronic packaging to package these devices - then we can put these lasers next to the processor and address the bandwidth limitations that ASICs are facing today.

 

"Wavelength division multiplexing (WDM) is effectively a zero-power gearbox"

 

What was the second area?

The other area that was very interesting is graphene, a new material that people are starting to work with and put on silicon. Researchers are showing very low-power, very high-speed operation. It is still at the research level, but that is another area we should watch.

 

The IEEE has started a group looking at the next speed Ethernet standard. No technical specification has been mentioned, but it looks as though 400 Gigabit Ethernet (GbE) will be the approach. Do you agree, and what role can silicon photonics play in making the next speed Ethernet standard possible?

Industry is busy arguing about the different ways of doing 100 and 400GbE, and perhaps forgetting the fact that we have been here before.

The simple fact is that people always go for higher bit rate when it is cost-efficient and power-efficient to do so. After that, wavelengths are used.

Wavelength division multiplexing (WDM) is effectively a zero-power 'gearbox', mixing the signals in the optical domain. You do pay a power penalty for it in the form of photons lost in the multiplexer and demultiplexer. However, that is not significant compared to the power consumption of an electronic gearbox chip.
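
To put rough numbers on that trade-off (all values here are illustrative assumptions, not figures from the interview): if the multiplexer and demultiplexer together introduce an insertion loss of IL decibels, the laser has to launch more power by the factor

\[
\frac{P_{\mathrm{laser}}}{P_{\mathrm{target}}} = 10^{IL/10},
\]

so a combined 3dB loss doubles the required per-channel laser power. For a 1mW launch target, that is an extra milliwatt optical per channel, or roughly 10mW electrical at a 10 percent wall-plug efficiency, whereas an electronic gearbox chip serialising and deserialising the same lanes consumes on the order of watts.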

Once we have exploited line rate and wavelength division multiplexing, we come to more complex modulation formats and pay the associated power and complexity penalty. Of course, more fibre channels can always carry more information bandwidth, but that is just a brute-force solution that works while density and bandwidth requirements are moderate.

I think the right 100 Gigabit is based on a WDM 4x25 Gig solution. This can then scale to 400 Gigabit by adding more wavelengths, and from there to 1.6 Terabits. We have already demonstrated this in a single chip and will later demonstrate it in the form of a 100Gbps QSFP.

 

How does the interface scale to 1.6Tbps?

Our devices are capable of running at 40 or 50Gbps, depending on the electronics. The electronics is going to limit the speed of our devices. We can very easily see going from four channels at 25Gbps to 16 channels at 25Gbps to provide a 400 Gigabit solution.

We can also see a way of increasing the line rate to perhaps 50Gbps, either as a straightforward NRZ (non-return-to-zero) line rate or, as some people are discussing, using multi-level modulation such as PAM-4 (4-level pulse amplitude modulation).
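
The arithmetic behind those two routes to 50Gbps is simply symbol rate times bits per symbol:

\[
R_b = R_s \log_2 M: \qquad 50\,\mathrm{GBd} \times \log_2 2 = 25\,\mathrm{GBd} \times \log_2 4 = 50\,\mathrm{Gbps},
\]

so NRZ needs 50GBd electronics, while PAM-4 halves the symbol rate at the cost of a tighter signal-to-noise budget.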

The customers we are talking to about 100Gbps are already talking about 400Gbps. So we can see 16x25Gbps, or 8x50Gbps if that is the right thing to do at the time based on the availability of electronics.

To go to 1.6 Terabit transceivers, we envisage something running at 40Gbps times 40 channels or 50Gbps times 32 channels. We already have done a single receiver chip demonstrator that has 40 channels, each at 40Gbps.
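
For reference, here is a minimal sketch (Python, purely illustrative; the channel-count and line-rate pairs are the ones mentioned above) tabulating how the aggregate rates work out:

    # Tabulate the WDM transceiver configurations mentioned in the interview.
    # Each entry is (channels, line rate in Gbps); aggregate = channels x rate.
    configs = [
        (4, 25),   # 100G: the single-chip/QSFP demonstration
        (16, 25),  # 400G option
        (8, 50),   # 400G option, if 50G electronics is available
        (40, 40),  # 1.6T: matches the 40-channel receiver demonstrator
        (32, 50),  # 1.6T alternative
    ]

    for channels, rate in configs:
        print(f"{channels:2d} x {rate} Gbps = {channels * rate:5d} Gbps aggregate")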

These things in silicon are not a big deal. The III-V guys really struggle with yield and cost. But you can envisage scaling to that level of complexity in a silicon platform.

 

Silicon photonics is spoken of not just as an optical platform like traditional optical integration technologies, but also as a design approach, making use of techniques associated with semiconductor design. The implication is that the technology will enable designs and even systems in a way that traditional optics can't. Can you explain how silicon photonics is a design approach and just what the implications are?

I think this is a key promise of silicon photonics, but perhaps one that has been oversold in recent years.

The key here is that given the maturity of the silicon processing capabilities, process simulation tools available and inherent properties of silicon, it is possible to predict the performance of the optical circuits far better in this platform than in any other before it. I think this is true and very valuable, potentially even a game changer.

However, we have to realise that there still remains an inherent difference between electrons and photons and their behaviour in such circuits. Photons remain in a quantum world in such circuits, where the wavelength of light is comparable to the feature sizes we manufacture. Hence we are dealing with a statistical quantum process whether we like it or not.

In summary, silicon will be a key enabler for on-chip system design, but it is too early for the university courses to stop graduating photonics PhDs!

 

So there is an advantage to silicon photonics, but are you saying it is not as simple as using mature semiconductor design techniques?

Photons and electrons are like cats and dogs. Electrons are dogs: they behave, they stick by you, they are loyal, they do exactly as you tell them, whereas cats are their own animals and they do what they like. And that is what photons are like.

So it is really hard to predict what a photon does. The dimensions we use for the structures we make are of the size of the wavelength of a photon. And that means it is more of a hit-and-miss process - there is always stray light, which has a habit of interfering, and you can get unexpected results.

When I interact with my electronics partners I find that they go through 6-9 months of very detailed simulation. They have very complex simulation tools.

When you come to photonics, we can certainly borrow some of these simulation tools, and we can simulate the process because we are using silicon. However, some of the tolerances that we need are beyond what the silicon guys need, and the way photons behave is very different. So in the end we don't spend nine months simulating; we spend a month simulating and three months running the process, optimising it, re-running it and re-optimising it.

We end up with the reverse situation, where the design takes only three months and the interaction between the designer and the manufacturing process takes nine months. So this is more of an iterative process. It is not as mature, and it is a little more statistical.

 
