
Semiconductor Microlithography

Postscript to 450-mm wafers

After posting on Why 450-mm Wafers and Why the Big Players Like 450-mm Wafers, I received a few comments from friends in the equipment supplier community talking about the effect of wafer-size transitions on the suppliers of process and metrology tools for semiconductor manufacturing. So, based on their inputs and further reflection, here are a few more thoughts on 450-mm wafers.

It is expensive to develop equipment to process larger wafers. If an equipment supplier spends a boatload of money developing new equipment, they want to sell that new equipment for a lot of money in order to recoup their investment. But their customers, the chip makers, don’t want the equipment prices to rise too much, or else the cost advantage of the larger wafer size will disappear. The goal should be a win-win sharing of the benefits of a larger wafer: the chip makers get a lower manufacturing cost per chip and the equipment makers get a higher margin on their equipment, thus paying off their R&D and making more money after that.

There is a general feeling in the industry that the transition to 300-mm wafers didn’t work out equitably: the equipment suppliers made all the investments, and the chip makers got all the benefits. And while I’m sure this version of the story is somewhat slanted, still we have seen most equipment suppliers dragging their feet on 450-mm tool development. They want the chip companies to pay up-front for development. Chip companies in turn want to get governments to foot the bill (why should a highly profitable company like Intel pay the costs needed to ensure future profits if they can get the state of New York to pay instead?). And so it has begun: the Global 450 Consortium funding tool R&D, and Intel, TSMC, and Samsung paying litho supplier ASML billions of dollars directly for 450-mm tool development.

How will a transition to 450-mm wafers affect the equipment suppliers? One effect is similar to that experienced by the chip makers: the small guys won’t survive. Only the bigger players can afford the development costs for 450-mm wafer size tools. But there has traditionally been a second effect: even the big players can’t afford the development costs of new process equipment on multiple wafer sizes.

When the industry moved to 300-mm wafers, new process tools were developed for 300-mm wafers only. Chip companies that stuck to 200-mm wafers couldn’t get the latest and greatest tools for the smaller wafer size. They were stuck in the past. Not only did they have a cost disadvantage compared to 300-mm fabs, they had a technology disadvantage as well. Staying up to speed on Moore’s Law required moving to 300-mm wafers.

Will the same thing happen at 450 mm? Maybe, but I’m not convinced that it is inevitable. As I said before, the move to 450-mm wafers will not likely be the slam-dunk cost savings that many people hope. If the cost advantage is only 10%, I suspect many companies will choose to stick with 300-mm wafers. But will the next generation of process tools be available at the smaller wafer size? If new 300-mm wafer fabs are being built, you can bet that equipment suppliers will scramble to provide them with tools.

All in all, I think the move to 450-mm wafers will be a mess. The timing is problematic, the economics are problematic, and the resemblance of the future to the past is not likely to be strong. Somehow, though, we’ll figure something out. We always do.

Why the Big Players Like 450-mm Wafers

The reason semiconductor manufacturers like the idea of 450-mm wafers is easy to understand: bigger wafers should lower the per-chip cost of manufacturing. But as I mentioned in my last post, this per-chip cost advantage doesn’t apply to lithography. Each time a wafer size is increased, only the non-litho (per-chip) costs go down, and so lithography costs take up a bigger portion of the overall costs. A corollary to this economic reality is that as lithography costs go up as a fraction of the total costs, the benefits of a larger wafer size go down. Past wafer size transitions have netted a 30% manufacturing cost reduction. The transition to 450-mm wafers will give at best a 20% cost reduction, and possibly only a 10% reduction.

Of course, these numbers are projections, and all projections are based on assumptions. It is possible to make more optimistic assumptions than I have, and that is probably what Intel, TSMC and the other big players are doing when they heavily promote 450-mm wafers. But why are the big guys so optimistic about 450-mm wafers? And why now?

As I briefly mentioned in my last post, for the switch to larger wafer sizes to be economically feasible two things must happen. First, the switch must enable a lower manufacturing cost per chip. The big players are hoping for a 30% cost reduction, but I am predicting a 10 – 20% benefit. Second, there must be sufficient demand for the chips being produced to justify a higher volume factory. A 450-mm fab will have at least double the output (in terms of chips) of a 300-mm fab. Thus, the demand for those chips must at least double to justify the building of a 450-mm fab. That’s a huge volume of chips, since 300-mm fabs are already exceedingly high-volume.

So an important effect of each wafer transition is that low-volume manufacturers can no longer compete. A 30% cost disadvantage is hard to overcome, and without the volume demand a new fab at the larger wafer size isn’t justified. The result? A successful wafer size transition is accompanied by a host of consolidations and chip companies going fabless (or fab-lite). This has happened again and again over the years. Only the biggest players survive, and the survivors get bigger.

Today, we have Intel, Samsung, Toshiba and TSMC at the top of the chip-making pyramid. But UMC, GlobalFoundries, Hynix, and Micron remain competitive irritants. What to do? A successful transition to 450-mm wafers will likely solve the problem for the big players. If 450-mm wafers result in a 20 – 30% cost advantage over 300-mm wafers, then any standard-process chip in a cost competitive space will have to be made in a 450-mm fab. But only a few of these $10B fabs will have to be built to supply that demand. And those fabs will be built by the biggest players, leaving the second tier manufacturers out of luck, and possibly out of business.

So why shouldn’t Intel, Samsung, and TSMC be bullish on 450-mm? If it works, it will mean that their dominance in the semiconductor world will be complete (maybe even pushing Toshiba out of the picture). And if EUV succeeds in keeping litho costs down, this scenario is all the more likely.

But personally I don’t think EUV will be successful at putting a lid on litho cost. As a result, I think the cost advantage of 450-mm will be closer to 10% than the 20 – 30% hoped for by the big guys. And while 10% may still be worth it for the highest-volume players, it won’t be enough to put the 300-mm fab world out of business.

That leaves one more ugly point to consider. If a transition to 450-mm wafers gives a per-chip cost reduction that is not sufficiently large to counter the rising costs of litho, then the per-chip costs overall might be higher (and maybe a lot higher) for new technology nodes. What will happen to Moore’s Law if moving to the next node no longer decreases the cost of a transistor?

We live in interesting times, and getting more interesting each day.

Why 450-mm wafers?

Why is 450-mm development so important to Intel (and Samsung and TSMC)?

A few years ago, Intel and TSMC began heavily promoting the need for a transition from the current standard silicon wafer size, 300 mm, to the new 450-mm wafers. While many have worked on 450-mm standards and technology for years, it is only recently that the larger wafer has received enough attention and support (not to mention government funding) to believe that it may actually become real. While there has been much talk about the need for a larger wafer, I’d like to put my spin on the whole debate.

First, a bit of history. Silicon wafer sizes have been growing gradually and steadily for the last 50 years, from half-inch and one-inch silicon to today’s 300-mm diameter wafers. The historical reasons for this wafer size growth were based on three related trends: growing chip size, growing demand for chips, and the greater chip throughput (and thus lower chip cost) that the larger wafer sizes enabled. And while chip sizes stopped increasing about 15 years ago, the other two factors have remained compelling. The last two wafer size transitions (6 inch to 8 inch/200 mm, and 200 mm to 300 mm) each resulted in about a 30% reduction in the cost per area of silicon (and thus cost per chip). And since our industry is enamored with the thought that the future will look like the past, we are hoping for a repeat performance with the transition to 450-mm wafers.

But a closer look at this history, and what we can expect from the future, reveals a more complicated picture.

First, how does increasing wafer size lower the cost per unit area of silicon? Consider one process step as an example – etch. Maximum throughput of an etch tool is governed by two basic factors: wafer load/unload time and etch time. With good engineering there is little reason why these two times won’t remain the same as the wafer size increases. Thus, wafer throughput remains constant as a function of wafer size, so that chip throughput improves as the wafer size increases. But “good engineering” is not free, and it takes work to keep the etch uniformity the same for a larger wafer. The larger etch tools also cost more money to make. But if the tool cost does not increase as fast as the wafer area, the result is a lower cost per chip. This is the goal, and the reason why we pursue larger wafer sizes.

As a simplified example, consider a wafer diameter increase of 1.5X (say, from 200 mm to 300 mm). The wafer area (and thus the approximate number of chips) increases by 2.25. If the cost of the etcher, the amount of fab floor space, and the per-wafer cost of process chemicals all increase by 30% at 300 mm, the cost per chip will change by 1.3/2.25 = 0.58. Thus, the etch cost per chip will be 42% lower for 300-mm wafers compared to 200-mm wafers.
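If you want to check that arithmetic yourself, here is a minimal sketch in Python, using the same illustrative numbers as the paragraph above (1.5X diameter, 30% higher per-wafer cost):

```python
# Per-chip cost scaling when wafer throughput stays constant (the etch example above).
def per_chip_cost_ratio(diameter_ratio, per_wafer_cost_ratio):
    """New vs. old per-chip cost, assuming chips per wafer scale with wafer area."""
    area_ratio = diameter_ratio ** 2           # 1.5x diameter -> 2.25x area (and chips)
    return per_wafer_cost_ratio / area_ratio   # per-wafer cost spread over more chips

# 200 mm -> 300 mm with a 30% increase in per-wafer etch cost
print(per_chip_cost_ratio(1.5, 1.3))   # ~0.58, i.e., about 42% lower etch cost per chip
```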

While many process steps have the same fundamental scaling as etch – wafer throughput is almost independent of wafer size – some process steps do not. In particular, lithography does not scale this way. Lithography field size (the area of the wafer exposed at one time) has been the same for nearly 20 years (since the era of step-and-scan), and there is almost zero likelihood that it will increase in the near future. Further, the exposure time for a point on the wafer for most litho processes is limited by the speed with which the tool can step and scan the wafer (since the light source provides more than enough power).

Like etch, the total litho process time is the wafer load/unload time plus the exposure time. The load time can be kept constant as a function of wafer size, but the exposure time increases as the wafer size increases. In fact, it takes great effort to keep the scanning and stepping speed from slowing down for a larger wafer due to the greater wafer and wafer stage mass that must be moved. And since wafer load/unload time is a very small fraction of the total process time, the result for lithography is a near-constant wafer-area throughput (rather than the constant wafer throughput for etch) as wafer size is changed.

One important but frequently overlooked consequence of litho throughput scaling is that each change in wafer size results in an increase in the fraction of the wafer costs caused by lithography. In the days of 6-inch wafers, lithography represented roughly 20 – 25% of the cost of making a chip. The transition to 200-mm (8-inch) wafers lowered the (per-chip) cost of all process steps except lithography. As a result, the overall per-chip processing costs went down by about 25 – 30%, but the per-chip lithography costs remained constant and thus became 30 – 35% of the cost of making a chip.

The transition to 200-mm wafers increased the wafer area by 1.78. But since lithography accounted for only 25% of the chip cost at the smaller 6-inch wafer size, that area improvement affected 75% of the chip cost and gave a nice 25 – 30% drop in overall cost. The transition to 300-mm wafers gave a bigger 2.25X area advantage. However, that advantage could only be applied to the 65% of the costs that were non-litho. The result was again a 30% reduction in overall per-chip processing costs. But after the transition, with 300-mm wafers, lithography accounted for about 50% of the chip-making cost.

Every time wafer size increases, the importance of lithography to the overall cost of making a chip grows.

And there lies the big problem with the next wafer size transition. Each wafer size increase affects only the non-litho costs, but those non-litho costs are becoming a smaller fraction of the total because of wafer size increases. Even if we can achieve the same cost savings for the non-litho steps in the 300/450 transition as we did for the 200/300 transition, its overall impact will be less. Instead of the hoped-for 30% reduction in per-chip costs, we are likely to see only a 20% drop in costs, at best.
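To make the dilution explicit, here is a quick back-of-envelope sketch. The 45% non-litho per-chip savings is my own assumption, chosen because it roughly reproduces the ~30% overall reduction of the 200/300 transition:

```python
# Sketch of the dilution argument: apply the same fractional savings to the non-litho
# costs in each transition, while litho's (unchanged) share of the cost grows.
def overall_drop(litho_fraction, non_litho_savings):
    """Overall per-chip cost reduction when litho cost/chip stays constant."""
    new_cost = litho_fraction + (1.0 - litho_fraction) * (1.0 - non_litho_savings)
    return 1.0 - new_cost

# 200 mm -> 300 mm: litho ~35% of cost; a ~45% non-litho savings gave roughly 30% overall
print(overall_drop(0.35, 0.45))   # ~0.29
# 300 mm -> 450 mm: litho now ~50% of cost; the same non-litho savings gives only ~22%
print(overall_drop(0.50, 0.45))   # ~0.23
```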

So we must set our sights lower: past wafer size transitions gave us a 30% cost advantage, but 450-mm wafers will only give us a 20% cost benefit over 300-mm wafers. Is that good enough? It might be, if all goes well. But the analysis above applies to a world that is quickly slipping away – the world of single-patterning lithography. If 450-mm wafer tools were here today, maybe this 20% cost savings could be had. But shrinking feature sizes are requiring the use of expensive double-patterning techniques, and as a result lithography costs are growing. They are growing on a per-chip basis, and as a fraction of the total costs. And as lithography costs go up, the benefits of a larger wafer size go down.

Consider a potential “worst-case” scenario: at the time of a transition to 450-mm wafers, lithography accounts for 75% of the cost of making a chip. Let’s also assume that switching to 450-mm wafers does not change the per-chip litho costs, but lowers the rest of the costs by 40%. The result? An overall 10% drop in the per-chip cost. Is the investment and effort involved in 450-mm development worth it for a 10% drop in manufacturing costs? And is that cost decrease enough to counter rising litho costs and keep Moore’s Law alive?
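The arithmetic behind that worst case, spelled out:

```python
# Worst-case sketch: litho is 75% of chip cost and unchanged by the wafer-size
# switch; the remaining 25% of costs drop by 40%.
litho_fraction = 0.75
non_litho_reduction = 0.40
new_cost = litho_fraction + (1.0 - litho_fraction) * (1.0 - non_litho_reduction)
print(new_cost)   # 0.90 -> only a 10% drop in per-chip cost
```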

Maybe my worst-case scenario is too pessimistic. In five or six years, when a complete 450-mm tool set might be ready, what will lithography be like? In one scenario, we’ll be doing double patterning with EUV lithography. Does anyone really believe that this will cost the same as single-patterning 193-immersion? I don’t. And what if 193-immersion quadruple patterning is being used instead? Again, the only reasonable assumption will be that lithography accounts for much more than 50% of the cost of chip production.

So what can we conclude? A transition to 450-mm wafers, if all goes perfectly (and that’s a big if), will give us less than 20% cost improvement, and possibly as low as 10%. Still, the big guys (Intel, TSMC, IBM, etc.) keep saying that 450-mm wafers will deliver 30% cost improvements. Why? Next time, I’ll give my armchair-quarterback analysis as to what the big guys are up to.

Semicon West Lithography Report

OK, I have to admit this right off: I didn’t go to Semicon West (held two weeks ago in San Francisco). I try never to go to Semicon West (I’ve been twice in the last 30 years, both times against my will). Why should I go? To listen to the latest marketing messages and company spin? To see a few technical talks that are way too light on the technical, but still full of talk? I don’t need to walk the cavernous Moscone Center to get that – everybody plasters the Web with this stuff on a regular basis. Thanks, but I think I’ll stay home.

This year was a perfect case in point. The only real news from Semicon was in the news – Intel’s announced investment in ASML. Yes, it would have been fun to sit in a San Francisco bar each evening and dissect the press releases and develop conspiracy theories. But even that is not really necessary. I’m here to give you my take on what the Intel investment means – and you don’t even have to buy me a beer to get it. (Though if you like this post, please feel free to buy me one the next time you see me.)

Intel’s investment in ASML has two parts – related, but separate. First, Intel is spending $2.1B to buy 10% of ASML, with an option to buy another 5%. ASML will use the money to buy back the same number of its shares, so there will be no stock dilution (a so-called synthetic buyback). That also means ASML will be getting nothing (no money, I mean) from this part of the deal. ASML is also offering similar deals to Samsung and TSMC, up to 25% ownership in the company. So what does this part of the deal mean? Intel and ASML made it clear that Intel gets no voting rights and won’t get early access to ASML technology or tools. Of course, they had to say that to avoid anti-trust litigation. So does the Intel investment help anyone?

There are three reasons why the Intel investment in ASML makes sense. First, it confirms the obvious: the success or failure of ASML will be mirrored as success or failure at Intel. Lest anyone doubt it, Intel needs Moore’s Law scaling to continue its growth and profitability. Lithography is the critical technology to make that happen, and ASML is the critical company to make lithography happen. Second, even without a place on the board, Intel’s ownership stake will add financial stability to ASML, whose stock price could easily drop dramatically if its EUV program were to flirt with failure. Since ASML’s importance to the industry goes far beyond its EUV program, keeping ASML developing and manufacturing lithography tools is critical.

But the third reason the investment makes sense is that the stock purchase is coupled with a $1B Intel investment in ASML R&D. This $1B infusion is what the whole deal is about, and the investment has one purpose: to speed 450-mm tool development at ASML. For several years now, as talk of 450-mm wafer sizes has heated up to what appears to be a critical mass, ASML has repeatedly said that it can’t do EUV and 450-mm development at the same time. Only after EUV has succeeded will ASML commit to 450-mm tool development. But since the day of reckoning for EUV continues to push out (possibly to 2016 or later), that means lithography, representing 50% of the cost of making a chip, won’t be 450-mm ready anywhere near in time to meet the (overly optimistic) timetables of the big 450-mm proponents (Intel, Samsung, and TSMC).

So here comes the investment from Intel. While the press release mentioned the importance of both EUV and 450-mm R&D, the only project mentioned for funding was 450-mm tool development. And to be clear, this is not only, or even mostly, EUV 450-mm development. A working 450-mm fab will need 193-immersion tools, 193 dry tools, and maybe 248-nm tools as well, all running at the 450-mm wafer size. If EUV works, a fab will need 450-mm EUV tools as well, but this is the only part of the lithography tool set that is optional for a 450-mm fab. So, in my opinion, the Intel investment is all about the 450-mm wafer size, and has essentially nothing to do with EUV lithography.

Why is 450-mm development so important to Intel (and Samsung and TSMC)? My answer to that question next time.

The Resolution Limit of Hard Drive Manufacturing

In lithography, pushing the limits of resolution is what we do. These efforts tend to get a lot of press. After all, the IC technology nodes are named after the smallest nominal dimensions printed with lithography (though the marketing folks who decide whether the next generation will be called the 16-nm or 14-nm node don’t care much about the opinions of lithographers). And the looming end of lithographic scaling has gotten all of us worried – regardless of your faith in EUV. Yes, resolution is the signature (though not the only) accomplishment of lithographers. That is why it is so important to carefully define what we mean by the term ‘resolution’ and understand why it is different for different tasks.

As I have said many times in papers, courses, and my textbook, the resolution one can achieve depends critically on the type of feature one is trying to print. In particular, the nature and limits of resolution are very different for dense patterns as compared to isolated patterns. For the last 10 years or so, the IC industry has been focused almost exclusively on pitch resolution – the smallest possible packing of dense lines and spaces. In optical lithography this resolution depends on the ratio of the wavelength (λ) to the imaging system numerical aperture (NA). For a single, standard lithographic patterning step there is a hard cut-off: the half-pitch will never drop below 0.25λ/NA (i.e., the pre-factor in this equation, called k1, has a lower limit of 0.25).

For 193-nm lithography, the NA has reached its maximum value of 1.35, so that the dense pattern resolution has bottomed out at a pitch of 80 nm. To go lower, one must use double patterning, or wait for Extreme Ultraviolet (EUV) lithography tools to drop the wavelength. Either way is costly, and the proper path past a 40-nm pitch is currently unknown.
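As a sanity check on those numbers, the single-exposure limit works out as follows (the 80-nm pitch quoted above is the practical floor; the theoretical k1 = 0.25 limit is a bit lower):

```python
# Single-exposure resolution limit: half-pitch = k1 * wavelength / NA, with k1 >= 0.25.
wavelength = 193.0   # nm (ArF)
NA = 1.35            # maximum numerical aperture for water-immersion tools
min_half_pitch = 0.25 * wavelength / NA
print(2 * min_half_pitch)   # ~71.5 nm minimum pitch in theory; ~80 nm in practice
```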

But the resolution limit for an isolated feature is not so clear cut. While resolution still scales as λ/NA, there is no hard cut-off for k1. As k1 is lowered, lithography just gets harder. In particular, control of the feature width (called the critical dimension, CD) is harder as k1 goes lower. Thus, for isolated lines, resolution is all about CD control.

And that’s where lithography for hard drive read/write head manufacturing differs from IC manufacturing. When manufacturers like Seagate and Western Digital increase the areal density of their drives, you can bet there was a shrink in the feature size on some critical geometry of the read and write heads. And that feature is an isolated line printed with optical lithography.

So how small are the smallest isolated features printed at Seagate and Western Digital? While I don’t have the exact values, I do know they are on the same order as the smallest features obtained by IC lithography – when double patterning is used. In other words, today’s hard drive manufacturing requires 2x-nm lithography (isolated lines) using single patterning.

The CD control requirements for these critical features are about the same as for IC critical features: +/- 10% or so. Overlay is critical too, but maybe a bit relaxed compared to the standard 1/4 – 1/3 of feature size that is the rule of thumb in the IC world. But there are a few extra requirements that make read/write head litho challenging. The wafers are smaller than the standard 300-mm diameter (they are thick ceramic wafers, not silicon), with no plans for a change to 300 mm. On each wafer, tens of thousands of heads are made (the standard lot size is one to four wafers), so throughput is not quite as critical as for ICs. But this also means that none of the latest generation of lithography tools (such as 193 immersion) are available for this task (they are all 300-mm only tools). Not that these guys would buy an immersion tool anyway – hard disk manufacturing is extremely cost sensitive, so they make do with lower-NA 193 dry tools.

So let’s do the math. To print 2x-nm features with a moderate-NA 193 dry tool, the hard drive makers are doing single-pattern lithography with k1 below 0.1. This is remarkable! The IC lithographers have never attempted such a feat. How is it done? Of course, you use the strongest resolution enhancement techniques from the IC world you can find. After that, it’s all about CD control, which means attention to detail. Let’s give the hard drive folks the credit they deserve: lithography at k1 < 0.1 is hard. Lithography scaling pressures are at least as fierce in the hard drive world as in the IC world, so you can bet the minimum isolated line feature size will continue to shrink. It will be interesting to see how they do it.
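Here is that math with hypothetical values for the feature size and tool NA (the exact numbers aren’t public):

```python
# k1 = CD * NA / wavelength for an isolated line. CD and NA below are assumed
# values for illustration, not figures from Seagate or Western Digital.
CD = 25.0          # nm, a hypothetical "2x-nm" isolated line
NA = 0.75          # a hypothetical moderate-NA 193-nm dry tool
wavelength = 193.0 # nm
print(CD * NA / wavelength)   # ~0.097, i.e., k1 below 0.1
```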

Aloha Lithography!

An excuse to travel to Hawaii? You don’t have to ask me twice. Especially if it is the Big Island, my favorite of the Hawaiian isles. My excuse this time? The 3-beams conference, also called triple-beams, EIPBN, or occasionally (rarely) the International Conference on Electron, Ion and Photon Beam Technology & Nanofabrication.

The conference was held last week (May 29 – June 1) at the excessively large Hilton Waikoloa Resort, where if I chose not to take the train or the boat from the lobby to my room, I could make the 15-minute walk instead. With the ocean, a lagoon full of sea turtles, dolphins to wonder over, and too many pools to count, one could easily spend a week’s vacation here without ever leaving the resort – which is no way to spend a vacation on the Big Island.

But I wasn’t here on vacation! I was here on business. OK, the conference was three days and I stayed for eight, but seriously, I was here for the conference. And so I diligently attended papers, ignoring the texts from my wife telling me which pool she was going to next.

Things began on Wednesday with the three plenary talks. Only later did it occur to me that they were of a common theme: optical lithography as the engine of scaling is reaching its limit, so what will come next? Burn Lin, lithography legend and VP of TSMC, gave his now-familiar pitch for massively parallel e-beam direct write on wafer. His analysis is always insightful, but because development of a practical e-beam solution is still 5 years away (I’m being optimistic here), there was an all-too-common bias in his thinking: the devil we don’t know (e-beam) is better than the devil we do know (EUV). Since Extreme Ultraviolet lithography is at the end of its 20-year development cycle, we know all about the problems that could still kill the program. Since massively parallel e-beam wafer lithography is far behind, it is likely that we haven’t seen the worst problems yet (how bad will overlay be, for example?). And in fact, some problems are the same, such as line-edge roughness limiting the practical sensitivity of any resist system.

Matt Nowak of Qualcomm gave a great review of 3D integration through chip stacking. If Nvidia and Broadcom are right and litho scaling below 22-nm doesn’t yield either better-performing or lower-cost transistors, what is next? Innovations in packaging. While not as sexy as wafer processing, packaging adds a lot to the cost of an IC. And with 3D chip stacking, it is likely that packaging costs would go down, system performance would go up, and we might even be able to lower wafer costs by better dividing up functionality between chips. It won’t be long before 3D integration is the new standard of system (chip) integration.

Finally, Mark Pinto of Applied Materials showed a very different example of what to do when silicon scaling begins to fail: go into another market. In this case, the market is silicon photovoltaics (PV). Historically, the PV market’s version of Moore’s Law has shown a 20% decline in cost/Watt for every doubling in installed capacity. That trend seems to be accelerating of late, with commercial installations now running at under $1/W. Grid parity, where the cost of solar electricity equals or is less than the market cost of electricity, has been reached in Hawaii and in several countries (even without accounting for the cost of carbon). The trends all look good, and solar is a good market for Applied.

After the plenary, it was off to the regular papers, with their interesting mix of the practical and the far out. First, an update on what I heard about EUV.

Imec has been running an ASML NXE:3100 for a year now, and its higher throughput means that process development is much easier compared to the days of the old alpha demo tool (ADT). Still, “higher throughput” is a relative term. The most wafers that Imec has run through their 3100 continuously is one lot – 23 wafers – taking about five hours. Thirteen minutes per wafer is a big improvement over several hours per wafer, but still far from adequate.

In the hallways, I heard complaints about $150,000 per EUV mask, and EUV resist at $40K per gallon. Everyone expects these prices to go down when (or if) EUV moves into high volume manufacturing, but anyone who thinks that EUV resists or masks will ever be cheaper than 193 resists or masks just isn’t thinking well. EUV may be Extreme, but it is also Expensive.

There were many excellent papers this year. JSR gave a great talk on some fundamental studies of line-edge roughness (LER) in EUV resists, developing some experimental techniques that were fabulous. A talk from the University of Houston explored the use of small-angle X-ray scattering to measure latent images in chemically amplified resists. Although promising, this technique will need massive control and characterization to yield quantitative results.

Paul Petric of KLA-Tencor described progress on their e-beam lithography tool, REBL. We still have two years before an alpha tool might be ready to ship to a customer. Richard Blaikie from New Zealand gave a great talk on evanescent interference lithography, though I might be biased in my opinion since I was a co-author.

I had a few hallway conversations with folks about scaling. The economic barrier of double patterning means that pitch has stopped scaling for some levels. Metal 1, in particular, is stuck at an 80-nm pitch (for three nodes now, it looks like), the smallest that 193 immersion can print in a single pattern. It seems likely that double patterning will have to be used at Metal 1 for the 14-nm node to bring the pitch down to 64 nm. The fin pitch for finFETs must scale, so self-aligned double patterning (SADP) is being used at the 22-nm node, but what will happen when the double patterning pitch limit of 40 nm is reached? The economics of litho scaling looks very ugly for the next few years, with a very real possibility that we just won’t do it (or maybe no one but Intel will do it).

On the last day of the conference there was a slew of good papers on directed self-assembly (DSA), the hottest topic in the lithography world right now. Progress towards practicality is rapid, and universities continue to churn out interesting variations. IBM discussed the possibility of using DSA for fin patterning below 40-nm pitch. They seem very serious about this approach.

Some of my favorite quotes of the week:

Referring to the molten tin sources used for EUV, Jim Thackeray of Dow said “If nature can do volcanos, maybe we can do EUV.”
Referring to EUV resists that can also be used for e-beam lithography, Michael Guillorn of IBM said “In my opinion, this is the best thing we got from the EUV program.”
Referring to problems making the DPG chip at the heart of the REBL system, Paul Petric of KLA-Tencor said “Making tools for making chips is easier than making chips.”

It was a good conference and a fun trip, and now I’m back home, but many of my fellow conference attendees are not. Vivek Bakshi’s EUV workshop was this week in Maui, and next week is the VLSI Technology and Circuits Symposium in Honolulu. I know several folks were able to convince their bosses that a three-week, three-island business trip was required. At the VLSI symposium, one of the evening rump sessions is entitled “Patterning in a non-planar world – EUV, DW or tricky-193?” Patterning is on everyone’s mind now, even chip designers’. So much attention is generally not a good thing. But we lithographers can expect even more attention over the next 12 months, as the industry makes some of the most difficult choices it has ever made in its 50-year history.

Lithography: How Slow Can We Go?

Moore’s Law has always been about economics: if we follow the trend of Moore’s Law, we can reduce the cost per function for our integrated circuits, making chips more powerful for the same cost, or making chips of a given capability cheaper. Historically, cost per function has decreased by about 29% per year, corresponding to a factor of 2 decrease in cost every two years. There are signs that this historic cost reduction trend will slow down. How much of a slowdown can our industry tolerate? If the cost per function is expected to decrease by less than 10% per year going forward, it is unlikely that chipmakers will be willing to invest the massive amounts required for a new generation of fabs. I suspect that the minimum cost per function decrease we can live with is about 15% per year.

What does this say about lithography costs and capabilities per technology node? The cost/function of a chip is the ratio of the cost/area of finished silicon from making the chip to the functions/area that the technology node can deliver. Over the last decade we have been on a 2-year technology shrink schedule, so that the functions/area double every two years. Thus, by keeping the cost/area constant, we have been able to reduce cost/function by 29% per year. If we stay on the same 2-year shrink cycle, a minimum allowed 15% cost/function decrease per year would allow a maximum of 20% increase in the cost/area of silicon each year. Alternately, if we keep the cost/area of silicon constant, we could slow down the 2-year technology node shrink cycle to 4 years between technology nodes, and still get the required 15% reduction in cost/function per year.
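Here is the same bookkeeping as a few lines of Python, so you can try your own assumptions about cost growth and node cadence:

```python
# Yearly change in cost/function = (yearly cost/area growth) / (yearly functions/area growth),
# where functions/area double once per technology-node cycle.
def cost_per_function_ratio_per_year(cost_area_growth, node_cycle_years):
    functions_growth = 2.0 ** (1.0 / node_cycle_years)
    return cost_area_growth / functions_growth

print(cost_per_function_ratio_per_year(1.00, 2))   # ~0.71: the historical ~29%/year decline
print(cost_per_function_ratio_per_year(1.20, 2))   # ~0.85: ~15%/year if cost/area rises 20%/year
print(cost_per_function_ratio_per_year(1.00, 4))   # ~0.84: ~16%/year with a 4-year node cycle
```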

Of course, everyone in the semiconductor industry would love to stay on our historic trends: constant cost/area of finished silicon, and a two year cycle of doubling the functions/area. It seems unlikely that this trend can be maintained during the current decade, however. Thus, using a minimum allowed cost/function decrease of 15%/year as a target, we can either allow chipmaking costs/area to increase by 20% each year and stay on the 2-year technology node cycle, or we can allow our technology node cycle to slow to every four years while keeping manufacturing costs/area constant. Either option will allow for continued success, and probably a bit of growth, for the semiconductor industry. But if the technology shrinks come too slowly, or costs rise too quickly, the days of Moore’s Law will be numbered.

Advanced Lithography 2012 – Day 4

As expected, the first EUV session of the last day of the conference filled a large room. It was time to hear the status of EUV tool development, in particular the EUV sources. ASML started things off with a rosy recounting of the successes of 2011. After installing their sixth NXE:3100 preproduction tool, ASML bragged of the 5300 EUV wafers processed at customer sites by these six tools in 2011. I couldn’t help remembering the ASML press release from last month saying a single 193i tool processed 4000 wafers in a day. That, in a nutshell, is the gap between preproduction and high volume manufacturing. They have a long way to go.

The EUV source status reports made future progress to higher power sound inevitable. Today, customers have sources with 9W of power at the intermediate focal plane, a 20W upgrade is being qualified, 50W has been demonstrated, and getting to 100W by the end of the year is straightforward. What could be easier? Somehow, I remain skeptical. Maybe it is because neither source presentation mentioned the damage caused by tin debris – the 5kV shorts or the frequent replacements of $1M collector mirrors – which can only get worse as source power goes up. Maybe it is because the roadmaps made the optimistic assumption that doubling the input laser power would double the EUV source output. Maybe it is because every past source milestone has been missed and it seems likely that future progress will be harder than past progress. Maybe it is because nature does not like EUV.

Or maybe I am biased. I wish the source vendors luck in reaching their goals. They are under a lot of pressure. In contrast, there was frequent mention of significant progress in EUV photoresists. A demonstration of 16 nm lines and spaces looked promising, though the dose was 33 mJ/cm2 (most people are hoping for 20 mJ/cm2 eventually) and the LWR was 3.7 nm, 23% of the nominal CD. This is progress certainly, but I find it very hard to believe that both dose and LWR will be appreciably reduced by next year.

I enjoyed the session on roll-to-roll printing, especially the Rolith presentation on a cylindrical phase-shifting mask with a UV lamp inside. This world of super-high volume patterning on continuous rolls of low-cost substrates is so different from what I think of as lithography that I could do nothing but look on in amazement.

The day ended for me with the last optical lithography session, where Nikon and ASML presented the current status of the latest 193-nm scanners. While single-patterning resolution remains fixed, the rest of the tool is getting better: CD uniformity, overlay and throughput. Under ideal conditions, CD uniformity can be less than 1 nm, single machine overlay can be less than 2 nm, and throughput can be over 220 wafers per hour (with a roadmap to >270 wph). These tools are becoming optimized for double patterning.

My favorite quote of the day: “Math works.” – John Biafore, commenting on a presentation showing a successful simulation prediction.

My least favorite quote: Cymer, talking about their improved internal EUV source testing facilities, said they will “hopefully learn faster than [the chip companies] do.”

And so another SPIE Advanced Lithography symposium is over. Till next year.

Advanced Lithography 2012 – Day 3

I continue to focus on line-edge roughness in my own research. This means that I attended papers in every conference in the symposium, since LER is an issue that cuts across all topics in lithography. (To be truthful, I meant to go to a paper in the new etch conference that talked about LER, but never made it.) LER is finally, in my opinion, getting the attention it deserves. I believe, and say to anyone who will listen, that LER is the ultimate limiter of resolution in optical lithography (e-beam as well). In fact, that was the title of my talk on Wednesday. I think that LER is a core component of Tennant’s Law, that it is killing EUV (in the same way that EUV source power is killing EUV), and that it will limit how far 193-nm lithography can be pushed. And the many difficulties of LER are one reason that directed self-assembly (DSA) is so attractive.

Wednesday began for me with another tour-de-force paper by Chris Bencher and coauthors (Applied Materials and IBM) on continued progress on defectivity for DSA. Their work showed that defect inspection and review tools were capable of enabling progress for DSA, and that defect levels, while not zero, are low enough to do serious work on finding and eliminating the defects that are there. This is good news. Many people are scared that DSA defects are somehow thermodynamically inevitable, or that the statistics of DSA defectivity scale in some ugly way. That doesn’t look to be the case. Among other things, Bencher inspected 550 million contact holes on a DSA wafer and found 22 were missing (one of the fears of DSA, as well as for most lithography schemes, is missing contacts). This is a rate that makes finding defects hard, but makes getting to sufficiently low defect rates look probable.

The next step is to get semiconductor-grade block-copolymer materials into the fabs for testing on real processes. And that is starting to happen. Yuriko Seino of Toshiba showed some amazing results of a DSA contact hole shrink process that looked almost ready to be used in manufacturing. Contact holes were printed in a guide material of spin-on carbon (CD = 72 +/- 8 nm, LER = 3.9 nm) on 300-mm wafers. A PMMA-Polystyrene block copolymer was spun on, filling the holes with the self-assembling polymer (a ring of polystyrene forms along the outside of the contact, with PMMA in the middle). A DUV flood exposure made the PMMA soluble in an organic developer. After development, the DSA holes had a CD of 28.5 +/- 1.4 nm, with an LER (or CER, contact edge roughness) of 0.7 nm. Amazing results – but this is what DSA does. Still to come are electrical via chain yield tests – an essential test of the overall process capability.

An interesting problem that must be tackled before DSA can be used in manufacturing is the impact of this process on design. In the contact hole shrink process (the most likely place DSA will first appear in manufacturing fabs), arbitrary contact holes on arbitrary grids are not possible. Instead, DSA will assemble to produce a specific contact hole size, and will be in the right spot only if those holes are on a proper grid. Both of these issues will significantly impact chip layout. Which is why I was excited to see a paper from Stanford on DSA-aware layout for random logic. With the right design approach, the limited range of contact hole features that can be printed with DSA can be a big advantage.

Unfortunately, on Wednesday I had to reprise my role as self-appointed ethics policeman for papers. A company (that should have known better) gave a paper presenting a new model they had developed. They kept all aspects of how the model worked secret, revealing not even the least detail (for competitive reasons, no doubt). Further, the model was not commercially available – it was for internal use only. As a result, after listening for 20 minutes I could come away from that talk with absolutely nothing. The minimum (and foundational) ethical principle of scientific publication is that sufficient detail be given so that others can reproduce the work. Otherwise, the paper is not a scientific one – it cannot be used to build our shared body of knowledge. I used the question period at the end of the presentation to explain this basic principle to the author.

After another poster session and several beers at KLA-Tencor’s PROLITH party, the third day of Advanced Lithography came to an end. Tomorrow morning will bring the EUV tool and source status review papers. I predict a full house at that session.