
Tennant’s Law

It’s hard to make things small. It’s even harder to make things small cheaply.

I was recently re-reading Tim Brunner’s wonderful paper from 2003, “Why optical lithography will live forever” [1], when I was reminded of Tennant’s Law [2,3]. Don Tennant spent 27 years working in lithography-related fields at Bell Labs, and has been running the Cornell NanoScale Science and Technology Facility (CNF) for the last five years. In 1999 he plotted an interesting trend for direct-write-like lithography technologies: there is a power-law relationship between areal throughput (the area of a wafer that can be printed per unit time) and the resolution that can be obtained. Putting resolution (R) in nm and areal throughput (At) in nm^2/s, his empirically observed relationship looks like this:

At = 4.3 R^5

Even though the proportionality constant (4.3) represents a snapshot of technology capability circa 1995, this is not a good trend. When cutting the resolution in half (at a given level of technology capability), the throughput decreases by a factor of 32. Yikes. That is not good for manufacturing.
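
To put numbers on that scaling, here is a minimal sketch in Python (the constant and the R^5 exponent are Tennant’s; the script itself is just my illustration):

    # Tennant's Law: areal throughput (nm^2/s) as a function of resolution (nm)
    def areal_throughput(R_nm):
        return 4.3 * R_nm**5  # 4.3 = empirical constant, circa-1995 technology

    for R in (100, 50, 25):
        print(f"R = {R:3d} nm -> At = {areal_throughput(R):.2e} nm^2/s")
    # prints 4.30e+10, 1.34e+09, and 4.20e+07 nm^2/s:
    # each halving of R cuts throughput by 2^5 = 32x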

What’s behind Tennant’s Law, and is there any way around it? The first and most obvious problem with direct-write lithography is the pixel problem. Defining one pixel element as the resolution squared, a constant rate of writing pixels will lead to a throughput that goes as R^2. In this scenario, we always get an areal throughput hit when improving resolution just because we are increasing the number of pixels we have to write. Dramatic increases in pixel writing speed must accompany resolution improvement just to keep the throughput constant.

But Tennant’s Law shows us that we don’t keep the pixel writing rate constant. In fact, the pixel throughput (At/R^2) goes as R^3. In other words, writing a small pixel takes much longer than writing a big pixel. Why? While the answer depends on the specific direct-write technology, there are two general reasons. First, the sensitivity of the photoresist goes down as the resolution improves. For electron-beam lithography, higher resolution comes from using a higher energy (at least to a point), since higher-energy electrons exhibit less forward scattering, and thus less blurring within the resist. But higher-energy electrons also transfer less energy to the resist, thus lowering resist sensitivity. The relationship is fundamental: scattering, the mechanism that allows an electron to impart energy to the photoresist, also causes a blurring of the image and a loss of resolution. Thus, reducing the blurring to improve resolution necessarily results in lower sensitivity and thus lower throughput.
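
To restate the scaling in one place (nothing new here, just the formulas above combined):

    pixel count per unit area ∝ 1/R^2    (halving R means 4x the pixels)
    pixel rate = At/R^2 = 4.3 R^3        (halving R means writing each pixel 8x slower)
    areal throughput At = 4.3 R^5        (halving R means a 32x throughput hit)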

(As an aside, higher electron energy results in greater backscattering, so there is a limit to how far resolution can be improved by going to higher energy.)

Chemically amplified (CA) resists have their own throughput versus resolution trade-off. CA resists can be made more sensitive by increasing the amount of baking done after exposure. But this necessarily results in a longer diffusion length of the reactive species (the acid generated by exposure). The greater sensitivity comes from one acid (the result of exposure) diffusing around and finding multiple polymer sites to react with, thus “amplifying” the effects of exposure and improving sensitivity. But increased diffusion worsens resolution – the diffusion length must be kept smaller than the feature size in order to form a feature.
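
As a rough rule of thumb (a standard textbook diffusion estimate, not a number from any particular resist), the acid diffusion length grows as the square root of the bake time:

    diffusion length ≈ sqrt(2 D t),  D = acid diffusivity, t = post-exposure bake time

so doubling the bake time to gain sensitivity also grows the blur by a factor of about 1.4, and that blur must stay below the feature size.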

Charged particle beam systems have another throughput/resolution problem: like charges repel. Cranking up the current to get more electrons to the resist faster (that is, increasing the electron flux) crowds the electrons together, increasing the amount of electron-electron repulsion and blurring the resulting image. These space-charge effects ultimately doomed the otherwise intriguing SCALPEL projection e-beam lithography approach [4].

The second reason that smaller pixels require more write time has to do with the greater precision required when writing a small pixel. Since lithography control requirements scale as the feature size (a typical specification for linewidth control is ±10%), one can’t simply write a smaller pixel with the same level of care as a larger one. And it’s hard to be careful and fast at the same time.

One reason why smaller pixels are harder to control is the stochastic effects of exposure: as you decrease the number of electrons (or photons) per pixel, the statistical uncertainty in the number of electrons or photons actually used goes up. The uncertainty produces linewidth errors, most readily observed as linewidth roughness (LWR). To combat the growing uncertainty in smaller pixels, a higher dose is required.
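
Here is a minimal sketch of that counting-statistics argument in Python (the 10 mJ/cm2 dose, the pixel sizes, and the EUV wavelength are assumptions chosen purely for illustration, not data from any tool):

    import math

    h, c = 6.626e-34, 3.0e8  # Planck's constant (J*s), speed of light (m/s)

    def dose_statistics(dose_mJ_cm2, pixel_nm, wavelength_nm=13.5):
        # photons landing in one pixel, and the Poisson relative uncertainty 1/sqrt(N)
        e_photon = h * c / (wavelength_nm * 1e-9)      # energy per photon (J)
        area_cm2 = (pixel_nm * 1e-7)**2                # pixel area (cm^2)
        n = dose_mJ_cm2 * 1e-3 * area_cm2 / e_photon   # expected photon count N
        return n, 1.0 / math.sqrt(n)

    for pix in (32, 16):
        n, noise = dose_statistics(10.0, pix)
        print(f"{pix} nm pixel: N = {n:.0f} photons, relative noise = {noise:.1%}")
    # Halving the pixel size quarters N and doubles the relative noise;
    # holding the noise constant requires 4x the dose.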

Other throughput limiters can also come into play for direct-write lithography, such as the data rate (one must be able to supply the information as to which pixels are on or off at a rate at least as fast as the pixel writing rate), or stage motion speed. But assuming that these limiters can be swept away with good engineering, Tennant’s Law still leaves us with two important dilemmas: as we improve resolution we are forced to write more pixels, and the time to write each pixel increases.

For proponents of direct-write lithography, the solution to its throughput problems lies with multiple beams. Setting aside the immense engineering challenges involved with controlling hundreds or thousands of beams to a manufacturing level of precision and reliability, does a multiple-beam approach really get us around Tennant’s Law? Not easily. We still have the same two problems. Every IC technology node increases the number of pixels that need to be written by a factor of 2 over the previous node, necessitating a machine with at least twice the number of beams. But since each smaller pixel takes longer to write, the real increase in the number of beams is likely to be much larger (more likely a factor of 4 rather than 2). Even if the economics of multi-beam lithography can be made to work for one technology node, it will look very bad for the next technology node. In other words, writing one pixel at a time does not scale well, even when using multiple beams.
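
Taking the estimate above (a factor of 4 more beams each node) at face value, the compounding is easy to see in a quick sketch:

    beams = 1
    for node in range(1, 6):
        beams *= 4  # ~2x pixels times ~2x write time per pixel, per node
        print(f"after node {node}: {beams} beams")
    # 4, 16, 64, 256, 1024 beams -- writing one pixel at a time does not scale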

In a future post, I’ll talk about why Tennant’s Law has not been a factor in optical lithography – until now.

[1] T. A. Brunner, “Why optical lithography will live forever”, JVST B 21(6), p. 2632 (2003).
[2] Donald M. Tennant, Chapter 4, “Limits of Conventional Lithography”, in Nanotechnology, Gregory Timp Ed., Springer (1999) p. 164.
[3] Not to be confused with Roy Tennant’s Law of Library Science: “Only librarians like to search, everyone else likes to find.”
[4] J.A. Liddle, et al., “Space-charge effects in projection electron-beam lithography: Results from the SCALPEL proof-of-lithography system”, JVST B 19(2), p. 476 (2001).

A view from the top (20)

It is an article of faith among semiconductor industry watchers that the last 20 years have seen considerable consolidation among semiconductor makers, with further consolidation all but inevitable. Of course, we can all point to mergers (TI and National being the latest) and players exiting the market (NEC was the #1 chipmaker in the world in 1991, but is now out of the business). But does the data support this view of rampant consolidation?

I’ve been looking over 24 years of annual top-20 semiconductor company revenue data compiled by Gartner Dataquest (1987 – 1999) and iSuppli (2000 – 2010), and the results show a more nuanced picture. As I noted in my last post on this topic, foundries are excluded from this accounting – their revenue is attributed to the companies placing the orders. Thus, this is a semiconductor product-based top-20 list, not a semiconductor fab-based top-20 list. With that in mind, let’s look at the trends.

Consider first the fraction of the total semiconductor market controlled by the top 20 semiconductor companies. The trendline shows a 15-point drop in market share over 24 years for the top 20, or about a 0.7-point decline on average each year. In other words, the rest of the semiconductor companies (those not in the top 20) saw their market share grow dramatically, from about 23% to 38%.

Semiconductor Top 20 Market Share

Likewise, the top 10 semiconductor companies saw their market share drop by ten points, from about 56% to 46% (about 0.45 points per year). The top five companies, on the other hand, have kept a roughly constant one-third share of the market since 1987. The trendline has a slope not significantly different from zero (-0.1 points per year).

Semiconductor Top 5 Market Share

But it’s the top two semiconductor makers that show the most interesting trend. The top 2 have seen a 6-point rise in their market share, to 22% today, an increase of about 0.3 points per year. The top three makers have seen a more modest 0.15-point increase in market share per year since 1987. Thus, consolidation of market share has come only at the very top of the market, the top 2 to be specific. For the rest of the industry, there has been a spreading out of the market among more players. Those top 2 players are now, of course, Intel and Samsung. But in 1987 they were NEC and Toshiba (Intel was #10 then, and Samsung wasn’t on the list).

Semiconductor Top 2 Market Share

So is the megatrend of semiconductor industry consolidation a myth? Yes and no. From a product perspective, the data is clear. The top two companies have grown in dominance, but for the remaining 80% of the market or so revenue is being spread over a wider array of companies over time. Foundries can be given some credit for the increased democratization of the market, but the trends were in place before foundries even came into existence. In fact, it is more accurate to say that foundries are a result rather than a cause of this democratization. It is the nature of the semiconductor product itself which has driven this increase in the long tail of the distribution of companies.

While there have always been a few blockbuster product categories (memory and microprocessors) where size matters, the vast majority of semiconductor revenue comes from niche (or at least small market share) products. Big companies don’t excel at making lots of niche products. Thus, small to medium-sized companies who stay close to their customers are able to compete well against their larger rivals. It is likely that this trend will continue so long as Moore’s Law continues.

Moore’s Law keeps the few big players still able to invest in new fabs quite busy, and they need big market categories to justify their big investments. There has been considerable consolidation in the industry if you consider fabs rather than products, since there are now only about five companies likely to stay at the front of Moore’s Law over the next few years. And these top five manufacturers have seen growth in their share of fab output. But I doubt that a smaller number of fabs competing at the very high end of the market will somehow reverse the trend of dispersion for the other 80% of the market. That is, until Moore’s Law ends. Then, these big companies with their big fabs are likely to turn their attention to markets that once seemed too diffuse to worry about. What happens then, in a post-Moore’s Law world, is anyone’s guess.

The top 20 ain’t what it used to be

Looking back at the annual top-20 semiconductor company data since 1987, it’s amazing how much has changed. In my last post on this topic, I looked at all the companies that went bankrupt, spun out, or merged their way into or out of the top-20 list. Change is definitely a constant in this field. Now let’s look at the makeup of the 2010 list of top semiconductor companies. Here is the list, as generated by iSuppli.

1 Intel Corporation
2 Samsung Electronics
3 Toshiba Semiconductor
4 Texas Instruments
5 Renesas Electronics
6 Hynix
7 STMicroelectronics
8 Micron Technology
9 Qualcomm
10 Broadcom
11 Elpida Memory
12 Advanced Micro Devices
13 Infineon Technologies
14 Sony
15 Panasonic Corporation
16 Freescale Semiconductor
17 NXP
18 Marvell Technology Group
19 MediaTek
20 NVIDIA

It’s important to note that foundries are excluded from this accounting – their revenue is attributed to the companies placing the orders. Thus, this is a semiconductor product-based top-20 list, not a semiconductor maker-based top-20 list.

And that distinction is obvious when looking at the make-up of the 2010 top-20. Six of the top 20 companies are fabless. Another seven are “fab-lite”, meaning they have stopped investing in new fabs or leading-edge manufacturing. That leaves just seven leading-edge semiconductor manufacturers in the top 20. Of those, four make mostly memory (80% of Samsung’s revenue came from memory), two make mostly logic, and one (Toshiba) makes a fair amount of both.

As a point of reference, if TSMC’s revenue were attributed to TSMC rather than their customers, they would be in fourth place, just barely behind Toshiba. The next two largest foundries, UMC and GlobalFoundries, would find themselves near the bottom of the top 20.

So, we have seven semiconductor manufacturers and three foundries that claim to still want to invest in leading-edge manufacturing capacity. That’s a far cry from just 10 years ago, when all 20 of the top 20 semiconductor companies were committed to building new leading-edge fabs. And not even all of these 10 companies can really afford to play at the bleeding edge. Only five of them (Intel, Samsung, Toshiba, TSMC, and Hynix) have over $10B/year in semiconductor revenue, probably the minimum needed to build that next $5B mega fab. Add EUV and 450-mm wafers into the mix, and you can see that there will be very few players at this ultra-high end of manufacturing.

It is conventional wisdom that the last decade has been one of extreme consolidation in the semiconductor business. Next, I’ll look at the numbers to see how well that conventional wisdom holds up.

What to do with an old lithography tool?

So you’ve got an old lithography tool hanging around. It doesn’t have the resolution (or any other spec) needed for production of pretty much anything that anyone wants to make. What can you do with it?

One option is to sell it to a Hollywood prop house. Apparently, that is what someone did with an old Cobilt mask aligner (at least, I think it is a Cobilt). It has probably shown up in several movies, but the one I saw it in was Silent Running, a good but not great sci-fi movie from 1972. Here are some shots from the movie.

Cobilt Aligner in Silent Running
Bruce Dern as Freeman Lowell limping past the mask aligner after murdering his crewmates.

Cobilt Aligner in Silent Running

Cobilt Aligner in Silent Running
Lowell using the mask aligner to reprogram the company droids to answer to him.

You can’t keep a good lithography tool down, not if you have a little imagination.

Is EUV the SST of Lithography?

Analogies with Moore’s Law abound. Virtually any trend looks linear on a log-linear plot if the time period is short enough. Some people hopefully compare their industry’s recent history to Moore’s Law, wishfully predicting future success with the air of inevitability that is usually attached to Moore’s Law. Others look to some past trend in the hopes of understanding the future of Moore’s Law. A common analogy of the latter sort is the trend of airplane speed in the last century.

Airspeed Trend

Plotting the cruising speed of new planes against their first year of commercial use, the trend from the 1910s to the 1950s was linear on a log scale, just like a Moore’s Law plot. But then something different happened. As airspeed approached the speed of sound, the trend leveled off – a physical limit changed the economics of air travel. The equivalent of Moore’s Law for air travel had ended.

For me, the interesting data point is the Concorde supersonic transport (SST). First flown commercially in 1976, the Mach 2 jet was perfectly in line with the historical log-speed trend of the first 50 years of the industry. And the SST was a technical success – it did everything that was expected of it. Except, of course, make money. The economic limit had been reached, but that didn’t stop many bright people from insisting that the trend must continue, spending billions to make it so. But technological invention couldn’t change the economic picture, and supersonic transportation never caught on.

So here goes my analogy. I think extreme ultraviolet (EUV) will be the SST of lithography. I have little doubt that the technology can be made to work. If it fails (I hope it won’t, but I think it will), the failure will be economic. Like the SST, EUV lithography will never be economical to operate in a mass (manufacturing) market. We can do it, but that doesn’t mean we should.

Of course, this analogy is imperfect, as all such analogies are. Air travel went through just three doublings of speed in 50 years, as opposed to the 36 doublings of transistor count per chip in the last 50 years of semiconductor manufacturing. And the economics of the industries are hardly the same. Still, the analogy has enough weight to make one think. We’ll know soon enough – EUV lithography will likely succeed or fail in the next two years.

As an aside, the first time I heard someone mention the analogy between airspeed and transistor trends was in the early 1990s, when Richard Freeman of AT&T gave a talk. The subject of his presentation? Soft x-ray lithography, what we now call EUV.

Frits Zernike, Jr., 1931 – 2011

Lithography lost one of its own on July 12 with the death of Frits Zernike Jr. from Parkinson’s disease. Here is his obituary from the New York Times:

Born and educated in Groningen, the Netherlands. A physicist with Perkin-Elmer Corp., Silicon Valley Group and Carl Zeiss, and first manager for Dept. of Energy’s Extreme Ultraviolet Lithography Program. Survived by his wife of 49 years, Barbara Backus Zernike, children Frits III, Harry, and Kate, daughter- and son-in-law Jennifer Wu and Jonathan Schwartz, and three grandchildren: Frits and Nicolaas Schwartz and Anders Zernike. Memorial service will be 3pm Thursday, July 28, at Essex Yacht Club, Novelty Lane, Essex, CT. Donations in his memory may be made to Dance for Parkinson’s, c/o NMS, 100 Audubon St, New Haven, CT 06510, or Community Music School, P.O. Box 387, Centerbrook, CT 06409.

Here is an excerpt from a post I made to this blog on February 27, 2009 concerning Frits:

“It was seven years ago that SPIE approached me with the idea of creating a major SPIE award in microlithography. I agreed to head up the effort, and gathered together a committee of other lithographers to establish the award process. Someone on the committee suggested naming the award after Frits Zernike, for three reasons. First, no major optical award had been named in his honor, even though the scientific contributions of this Nobel prize winner are legion. Second, the name has high recognition in the optical lithography community due to the ubiquitous use of the Zernike polynomial for describing lens aberrations. The third reason is more personal – Zernike’s son, Frits Zernike Jr., worked for many years in the field of lithography at Perkin-Elmer and later SVG Lithography before retiring. Some of us on the committee knew him, and when contacted he was very supportive of an award named for his father.”

Litho in Las Vegas

The 3-beam conference here in Las Vegas began on Wednesday morning with the plenary session. Nick Economou discussed the history and current performance of the Helium Ion Microscope. What an amazing tool! It has much higher resolution than a scanning electron microscope (SEM) with far less charging. The result is truly amazing pictures of biological and other non-conducting samples. I can’t wait to see pictures of photoresist patterns with this tool – I’m sure it will quickly become indispensable, especially for line-edge roughness characterization.

Sam Sivakumar of Intel seems to be making a second career out of giving plenary talks (proof of the never-ending interest in hearing about what Intel is going to do next). His talk brought up a long-simmering (or at least recently-simmering) question that I have. Standard naming convention for semiconductor technology nodes cuts the name of the node in half for two generations out. Thus the 90-nm and the 65-nm nodes become the 45-nm and 32-nm nodes (sometimes rounding is necessary). Of course, these names have nothing to do with the dimensions of the features involved in the process, but the standard of dividing by two for the names has seemed inviolate. Today most state-of-the-art companies claim to be manufacturing at the 32-nm node. That means two nodes out would be the 16-nm node, right?
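
As a toy illustration of that halve-the-name-every-two-nodes rule (my own sketch; the rounding is handled by Python’s round(), which happens to match the industry’s choices here):

    nodes = [90, 65]
    for i in range(6):
        nodes.append(round(nodes[i] / 2))  # each name is half the name two nodes back
    print(nodes)  # [90, 65, 45, 32, 22, 16, 11, 8]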

So I didn’t know what to think when Intel began calling it the 15-nm node. Why? Are they hoping for a 1-nm marketing advantage over their rivals? If they don’t get to the node first, will they say “Yes, but they are only doing 16-nm, but WE are doing 15”? A 1-nm advantage seems insufficiently significant, and now it seems that the marketing gurus at Intel agree. While the program listed Sam’s talk as having “15nm Node” in the title, his opening slide had changed the title to “14nm Node”. Now Intel will have a 2-nm advantage over the rest of us. That’s real progress.

Sam provided a couple of quotable moments in his talk: “Traditional scaling approaches will no longer work.” “Fundamental work is needed in LWR to affect improvement.” I agree.

Matt Malloy of SEMATECH gave an interesting talk on the sources of defects for nanoimprint lithography (of the Molecular Imprints step-and-flash variety). This is an important topic since defect density is the only serious roadblock to implementing nanoimprint in production. I was surprised to learn that the vast majority of defects come from the template manufacturing process. At least we know where to focus our attention now.

I was happy to hear from Dan Sanders of IBM Almaden Research that directed self assembly (DSA) has moved past the “trough of disillusionment” in the Hype Cycle and is now entering the “slope of enlightenment”. Progress on DSA in the last year has been remarkable, and I expect that progress to accelerate in the next year. This is a research area to get behind.

David Melville of IBM gave an invited talk on computational lithography. This quote was right on: “Effective optimization [of the total lithography process] is no longer in the realm of the lithography engineer.” Serious mathematicians and computational geeks are needed as well. What a different world from when I started computing lithography on my PC so many years ago.

A cool idea that I am still learning about is “Absorbance Modulation” materials. Essentially, they are like the old idea of contrast enhancement materials, but made erasable using a second wavelength of light (one that the underlying resist is not sensitive to). There are many variations on how such a material can be used to improve resolution, but the real goal would be to perform double patterning with just a double exposure process. Alas, no absorbance modulation materials are yet available at 193 nm.

On the last day of the conference I gave my paper – a work completed that morning and something completely different from what I had originally proposed in my abstract. That’s life on the (rough) edge of research.

The Mapper folks had a couple of talks promising a 1 wafer-per-hour maskless e-beam lithography tool by the middle of next year. If they succeed, that tool could be a game changer. I’ll be staying tuned, but the challenges remain great.

Finally, at the end of the day Alex Liddle of NIST had a fascinating talk on measuring acid blur in chemically amplified resists using single molecule fluorescence. Cool stuff, though more work is needed.

Another interesting 3-beams conference is over, and I can hardly wait for next year’s. It doesn’t hurt that it will be on the Big Island of Hawaii in 2012.

Aside: Thanks to Richard Blaikie for exposing me to this quote from Albert Einstein: “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction.” This could be the motto of lithographers everywhere.

Litho in Las Vegas – Prologue

Las Vegas is not my favorite city. It is America’s monument to greed (and bad taste), where form not only wins over substance, it’s as if substance never even showed up for the race. This place revels in its lack of roots, tearing down old facades to build newer, bigger facades (little is more pathetic than faded glitz) in an arms race of extravagance. It is all so purposely disorienting.

So why am I here? It is time for the 55th International Conference on Electron, Ion and Photon Beam Technology & Nanofabrication (EIPBN). That’s a mouthful, which is why attendees universally call it the triple-beam or three-beams conference. Fortunately, the conference is in a resort near the mountains outside of town. Still, even this place will not let you escape the Las Vegas vibes – you can’t get anywhere in the resort without walking through the smoke-filled casino that fills its core. Ah well.

I don’t attend this conference every year, but I wish that I could. It is generally academic, with papers that are a shotgun blast of ideas ranging from cool to bizarre. I always come away inspired and with new things to think about and work on. That will be my way of judging success this year as well.

The conference will begin with a plenary session, but the festivities have already started with the traditional Tuesday evening welcome reception, this time including an Elvis impersonator. Welcome to Las Vegas.

Steven A. Orszag, 1943 – 2011

Dr. Steven A. Orszag, a renowned expert in computational fluid mechanics, died on May 1 at the age of 68. (His obituary in the New York Times can be accessed here.) One of his most important contributions was the development of spectral methods for solving complex fluid dynamics problems, greatly increasing the efficiency of the numerical calculations. These techniques are now standard in fluid dynamics, especially for turbulent flow, but are also used in a number of other applications of scientific computation.

It was one of those other applications that led me to meet Steven. Dr. Orszag had a long collaboration with Dr. Eytan Barouch, of Clarkson and then Boston Universities. Eytan got involved in lithography simulation in the late 1980s (I worked with him quite a bit during those early years) and applied Orszag’s spectral methods to aerial image simulation problems. Eventually Barouch and Orszag formed Vector Technologies to market their lithography simulator FAIM. Orszag’s involvement was mostly advisory and on the technical side, so far as I could tell. In the 1990s Dr. Orszag was the coauthor of 16 SPIE proceedings papers on lithography simulation, and I met him a time or two at these conferences. He was obviously bright and busy, and it was clear to me that lithography was more of a hobby for him, an interesting offshoot of his many scientific interests.

Steven Orszag also has a famous son – Peter Orszag, formerly the budget director for the Obama administration.

SPIE Advanced Lithography Symposium 2011 – day 4

This week in San Jose began cold, but warmed up by Thursday to the kind of weather we all expect from California. So too with the conference, and I think Thursday had some of the most interesting, and surprising, presentations.

The day began for me with the much-anticipated presentation by ASML on the NXE:3100 extreme ultraviolet (EUV) “pre-production” lithography scanner. As expected, it was equal parts marketing pitch, pep talk, and soothing reassurance that everything is under control. The first of six NXE:3100s shipped to Samsung last year, printing its first wafers in December. The second 3100 is being installed now at Imec in Belgium. The other systems will roll out at about two-month intervals. Unfortunately, there was no official word (or data) from the system running at Samsung, but ASML provided a good overview of the performance of the systems at the ASML factory.

You have to give ASML a lot of credit – they know how to build a good tool. The lens quality, resolution, defectivity, and overlay performance were as good as anyone could expect at this point. The “tool flare” was down to 5%, but be careful – the total flare seen at the wafer is the tool flare plus flare caused by the mask and REMA masking blades. This total flare is chip-layout dependent, and was as high as 12% for a Flash chip example they showed (you had to be very close to the screen to see the small “12” in the legend of the graph, but at least it was there).

The performance of the tool was very good, except for two problems. The linewidth roughness (LWR) of all their images was very bad, though not a single LWR measurement was shown in the presentation. But it was the throughput that everyone was most interested in hearing about, and that number was 5 wafers per hour. Of course, that is not a “production” throughput number, since it assumed a 10 mJ/cm2 resist and didn’t expose any edge fields, but it’s still a benchmark number to compare to. It’s better than I thought it would be, but still a factor of 12 below the tool spec of 60 wafers per hour. ASML sought to reassure the skeptical members of the audience by renaming their roadmap for source power an “upgrade path” instead.

As anyone who has known me for a while already knows, I am a skeptic of the viability of EUV lithography for IC manufacturing. It’s not that EUV can’t work, it’s just that the effort required to make it work doesn’t line up with the timing and cost needs of chip manufacturers. When serious work first started on EUV lithography in the mid 1990s, the target insertion into manufacturing was the 130-nm node. Since then, the target has slipped by at least two years for every three years of effort. Today, Intel talks about inserting EUV into manufacturing at their 10-nm node four years from now. The result: tool development has been shooting at a moving target, which is almost always a recipe for disaster.

The 10-nm node for logic means a 20-nm or 22-nm half-pitch, which puts the k1 factor for the half-pitch below 0.5 on the (newly increased) 0.33-NA production tool. This means off-axis illumination will likely be required, and it will be difficult to extend the tool to the next node. Mask blank and patterned mask defectivity is still an unsolved problem, and thanks to a lack of appropriate mask inspection tools we “don’t know what we don’t know” in terms of how bad the problem is. Cost, of course, is just as critical as performance, and a $100M tool will need at least 100 wafers per hour in production throughput (with a spec’ed throughput much higher) to be viable. The effort required to get beyond 100 wafers per hour is huge, especially since the exposure dose constraints that LWR will put on the resist are not likely to be overcome. We have no roadmap, let alone an upgrade path, for reducing LWR to 2 nm.
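
For reference, the k1 arithmetic behind that half-pitch claim, using the standard definition (resolution = k1 λ/NA) and EUV’s 13.5-nm wavelength:

    k1 = half-pitch × NA / λ = (20 nm × 0.33) / 13.5 nm ≈ 0.49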

And so the final push is on. It will be an all-out effort by the industry for the next 12 – 24 months to try to make EUV lithography work. But ASML has 10 production EUV tool orders in their hands. How did they manage that, given the uncertainty involved and the fact that the preproduction tool has yet to be evaluated? As one chip maker told me, ASML is very good at “twisting arms”. Another chip maker said they had no choice but to “play the game”. After all, ASML controls the spigot on 193-nm immersion tools. So the orders are in, and the industry is sharing the risk with ASML (probably not a bad thing). If this year at the SPIE Advanced Lithography Symposium was interesting, next year promises to be even more so.

To make it clear, I am a skeptic, but I would be happy if EUV lithography was successful. I’m doing my part by trying to understand the fundamentals of LWR. Regardless of the outcome, the EUV effort is fun science and engineering! I hope we will continue to work on the hard problems of EUV in the cold light of reason.

The most pleasantly surprising aspect of this year’s symposium was the variety and quality of work presented at the Alternate Lithographic Technologies conference. Now that EUV has been separated out as its own conference, the Alternate Lithography conference has been able to flourish with exciting presentations on nanoimprint, directed self-assembly, interferometric lithography, and many other innovations. The University of Wisconsin had a great talk on modeling self-assembly. Virginia Tech surprised me with a novel (and potentially revolutionary) approach to double patterning as a non-linear double exposure. And it is always fun to think about the bizarre behavior of evanescent waves, inspired by a very good talk from the University of Canterbury (Christchurch, New Zealand).

And now I’m going home, where I hope to catch up on the sleep I’ve lost in the last week. Am I getting too old for life in the fast lane of advanced lithography?