Semiconductor Microlithography

Advanced Lithography 2012 – Day 2

There is no place I’d rather be on Valentine’s Day than in San Jose surrounded by my friends and colleagues in lithography. No wait, I didn’t mean that. I miss my wife and two young daughters. I don’t like traveling without them.

While Valentine’s Day is the Hallmark holiday I despise the most, it does serve to remind me of the conflicted feelings of most business travelers who have families. Over the years I have missed holidays and birthdays and uncountable little things in the lives of the people I love most. I have also been to interesting and exotic places, met great people (many of whom have become lifelong friends), and worked on fun and intellectually satisfying projects. Mostly, I’ve been able to keep these things in balance during the various phases of my life, and for that I am grateful. While popular culture celebrates those who live their lives in the extreme, the wise know that happiness and success are about balance.

But there is no balance this week. This week is non-stop, metal to the floor, take no prisoners lithography. Tuesday began with papers at 8am and the last panel ended at 9pm, with, for me, a lunch meeting and a poster session thrown in in lieu of dinner. I ran constantly from session to session (trying not to miss the most interesting papers), and constantly ran into colleagues I see only once each year (trying to remember their names while not looking down at their badges). I ended the day with a shot of Jameson’s at Original Joe’s, tired but satisfied.

I also had two papers, one oral and one poster. Luckily, I had them both prepared well in advance. (People who know me are laughing out loud right now.) In truth, the stress and adrenaline of just-in-time presenting makes this conference even more exciting, though I swear every year that this year will be the last time I am so disorganized. Now if only I could finish my talk for tomorrow…

I saw many interesting papers of the solid, incremental advancement type – the lifeblood of this conference. I criticized a few of them, mostly for failing to learn the lessons I’ve already learned and repeating the mistakes other authors have already made. Nobody can read and absorb the entire literature and history of an industry, which is why the conference presentation format is so valuable. The communication and teaching are two-way. You tell the audience what you have done and learned in the hopes of teaching them, and they give you feedback as to how that fits within the community’s vast knowledge base. The bigger and more diverse the audience, the better. But make no mistake, baring your technical soul for inspection is a scary thing, especially for the many young folks presenting here for the first time. I congratulate each author for their mettle – success is in the doing.

My sense of the mood at the conference is one of disappointment with the progress of EUV lithography. Roadmaps are slipping because of source power. Progress in line-edge roughness reduction is almost nonexistent. The major ASML papers on EUV progress are yet to come.

While everyone is excited about directed self-assembly (there are 55 DSA talks this year, compared to 20 last year), there are still many unknowns. I suspect, however, that a first application of DSA is emerging that could jumpstart its transition from lab to fab: contact hole shrinking. After exposing the contact holes to be bigger than we want them, DSA polymers coat the insides of the holes, both shrinking them and healing most of their roughness. A neat trick. While this approach does nothing to improve contact hole pitch, it looks like an important and valuable tool for printing one of the most difficult lithography layers.

My favorite quote of the day: “What does not change in lithography is change.” – Tatsuhiko Higashiki, Toshiba. My favorite new acronym: InStED Lithography (Interference Stimulated Emission Deactivation Lithography) – John Petersen.

Advanced Lithography 2012 – Day 1

Attendance at this year’s Advanced Lithography Symposium is up 10%, to over 1500, though we still haven’t recovered from the huge drop in numbers that accompanied the economic collapse in 2008. Still, the mood here is good. When I ask people how they are doing the answer is almost universally the same: busy. And busy is what we will all be this week, trying to navigate the seven conferences (six in parallel), 12 short courses, three panel discussions, multiple company-sponsored technical forums and hospitality suites, and of course the numerous side meetings, customer dinners, and hallway encounters that give AL its social dimension and where much of the real work of information transfer occurs.

For me the conference started out with my short course (informally titled “The World According to NILS”). The students were especially enthusiastic, which always gives me a great energy boost to start the week. But if I hadn’t been teaching, I’m sure I would have been attending the new short course on directed self-assembly (DSA). It was by far the most popular course this year.

On Monday the conference began with the Plenary Session. John Bruning won the Frits Zernike Award for Microlithography, though he wisely chose not to change his vacation plans in the Caribbean with his wife in order to accept the award. Burn Lin was recognized for his ten years of service as the founding editor-in-chief of the Journal of Micro/Nanolithography, MEMS, and MOEMS (JM3). Since I have taken over from Burn as the new editor-in-chief, the well-deserved praise and recognition that he received only made it more obvious how big the shoes are that I must fill. We also welcomed four new SPIE fellows into our ranks: Patrick Naulleau, Andy Neureuther, Vivek Singh, and Yu-Cheng Lin. Congratulations to all of them.

The three plenary talks were all very good. Jim Clifford, operations VP at Qualcomm, made sure we all understood how much our children (and grandchildren) would be addicted to wireless devices, and how they would need the continuation of Moore’s Law to make that happen. His message was “If you build it, we will come.” But with a caveat. The first slide of his talk had only one word: COST. Lest we think that Moore’s Law meant anything different, he assured us that it means lowering the cost per function over time. A more powerful chip that doesn’t have lower cost per function is simply not interesting to Qualcomm. How much lower? I asked that question and got a straight answer. Historically, our industry has achieved a 29% reduction in transistor cost each year. Clifford thought that cost reduction below the “low double digits” would not be worth the investment. So, Moore’s Law can slow somewhat, maybe even by a factor of two, but if it slows any more than that it will be dead. Clifford ended the talk by encouraging us lithographers to work hard: “I want to tell you how important you are to my retirement.”

Grant Willson had a great plenary talk full of poetry and insight. Chris Progler of Photronics kept us informed and entertained as he inundated us with data and conclusively proved that squares don’t make good Frisbees.

The crowds are always the biggest on the first day, since people have yet to burn out from technical information overload. I had to watch the first EUV papers in the overflow room. There I learned about ASML’s progress on getting the throughput up on the NXE:3100 preproduction EUV tool. Last year, shortly after installing the first 3100 at Samsung, ASML announced that the throughput would be a disappointing 5-6 wafers per hour (the spec was 60). One year later, ASML showed that the actual throughput was now 4 wafers per hour. Not exactly the progress we had been hoping for. Why the backslide? The quoted “6 wph” was based on a mythical “10 mJ resist” (the throughput numbers for the NXE:3300 will be based on a 15 mJ resist). Such a resist does not (and I’m sure will not) exist. The actual 4 wph was based on a “usable dose”, though the results did not produce acceptable linewidth roughness (LWR), so there is some doubt on just how usable that dose really is.

Burn Lin also had a standing-room-only overflow crowd for his talk on multiple-electron-beam lithography (we are very interested in both EUV and alternatives to EUV). He made the REBL group at KLA-Tencor very happy with the bold proposal that we should make every layer on 450-mm wafer devices using e-beam lithography, and in particular with REBL (reflective electron-beam lithography). His analysis was good, but made some very important assumptions: REBL will perform to specification, be delivered on time, and come in at the currently estimated price. If that happens it will be a first for an NGL technology.

I heard some good talks by Moshe Preil and Jim Thackeray, some poor talks by a few others, and the week of technical papers has begun. Now if I can only finish my talk in time to give it tomorrow…

Monday is always the most quotable day of the symposium. Here are some of my favorites:

“EUV is like a trip to Disneyland.” – Jim Clifford
“I’ll retire when I expire.” – Grant Willson
“EUVL is needed in 2004 or sooner.” – Peter Silverman of Intel, in a talk from 2000 (as quoted by Grant Willson)
“Nothing fails this year’s technology faster than aiming for last year’s targets.” – Moshe Preil

Advanced Lithography 2012 – A Prologue

Yesterday I found my way to San Jose (a more arduous journey than in the past, since all direct flights from Austin to San Jose have disappeared like civility in American politics). Another SPIE Advanced Lithography Conference is about to begin. As usual, I will blog each day from my vantage as an overwhelmed conference participant. And also as usual, I will set the stage for what I think will be the highlights of the week in this prologue. I hope, of course, that I am wrong – that will mean that I don’t know what will be important and a surprise is in store. Surprises are the best thing about this conference. And I have never been bored here yet.

Let’s begin with the obvious topic: EUV lithography. I believe that 2012 will be the make or break year for EUVL. I’ve said that before. In 2010 and 2011, in fact. I continue to be amazed at how willing customers are to live with missed specs and slipped deadlines. I guess that’s what happens when you have no alternatives. But this time I mean it: 2012 is the make or break year for EUV. And of course, all eyes are on ASML and their source suppliers.

Last year ASML shipped 6 NXE:3100 “pre-production” EUV tools (actually, the first one was in 2010), at an estimated $120M each. While spec’ed at 60 wafers per hour throughput, they delivered 6 wph. An upgrade of the source by Cymer to bring that close to 20 wph has been delayed. Meanwhile, the production tool NXE:3300 is supposedly still on schedule for delivery in the second half of this year. But wait: the spec on throughput for the 3300 has changed. It is now 69 wph, down from an original 125 wph, which is down from even higher expectations. The higher 125 number will come later, we are told, with an upgrade to the source. It’s all about managing expectations. And twisting arms. Did I mention there are no alternatives?

But expectations are not the only thing that matters. Eventually, real throughput on real product will matter. Which brings up an interesting question – one that I hope to gain more insight on this week. How high does “high” have to be in High Volume Manufacturing (HVM)? What’s the lowest actual production throughput that customers can live with and still think EUV was worth the commitment? The fabs aren’t talking. Understandably, they don’t want to give ASML the lowest number, since that will take the pressure off them to do better. And different customers will have very different answers, I’m sure. Will Intel buy 10 EUV tools if those tools can only deliver 40 wph in production?

A related but more technical question is also on my mind: How much line-edge roughness (LER) can devices tolerate at the 14-nm node and below? This question is related to throughput because the easy way (and maybe the only way) to reduce LER is to increase exposure dose. And in source-limited technologies like EUV and electron-beam lithography, throughput is mostly determined by the resist dose requirement. (See my previous posts on Tennant’s Law.) So, as always, I’ll be focusing on the LER papers this week, hoping to gain enough insight to say I actually understand LER, what causes it, and how small it can be made.

As I’ve said before, LER is the ultimate limiter of resolution. Unless, that is, we break the LER paradigm of our current exposure and resist approaches. One way to do that is with directed self-assembly (DSA). This year’s hot topic will, no doubt, be DSA. The technology has shown enough promise that it has gotten the industry excited, and there has been a lot of activity in the last year. Soon, however, and maybe this week, reality will set in. Getting DSA to work in production will take an enormous effort.

I’ve seen these cycles before: A promising new idea gets people excited. There is potential to solve a nagging industry problem, or enable a future generation of products. After the early adopters report on their progress (and those reports are always glowing), the early followers jump in and start working out the details. Then they come to this conference and start reporting on the problems they are having. Solutions to those problems are proposed and people get back to work. But do the solutions come fast enough, or does the excitement wane? If the problems pile up too fast, people looking for a quick fix give up. A few diehards labor on. Progress is slow, coupled with complaints that EUV is getting all the resources. Will the new idea survive these travails and eventually become a “plan of record” at enough fabs to be self-supporting? The answer will depend on the difficulty of the problem and the grit and wits of the diehards.

Does this sound familiar? It could describe sidewall-spacer double patterning (made it), or litho-freeze-litho-freeze double patterning (hasn’t made it), or model-based OPC (made it), or imprint (hasn’t made it). And it will describe DSA, though we are a few years away from knowing the outcome.

What else will I be watching for this week? Ah yes, the surprises. Hopefully, I won’t be in the wrong session when they occur.

Tennant’s Law, Part 2

In the first part of this article, I talked about the empirically determined Tennant’s Law: the areal throughput (At) of a direct-write lithography system is proportional to the resolution (R) to the fifth power. In mathematical terms,

At = kT*R^5

where kT is Tennant’s constant, and was equal to about 4.3 nm^-3 s^-1 in 1995 according to the data Don Tennant collected [1]. The power of 5 comes from two sources: (1) areal throughput is equal to the pixel throughput times R^2, and (2) pixel throughput is proportional to the volume of a voxel (a three-dimensional pixel), R^3. The first part is a simple geometrical consideration: for a given time required to write one pixel, doubling the number of pixels doubles the write time. It’s the second part that fascinates me: the time to write a voxel is inversely proportional to the volume of the voxel. It takes care to write something small, and it’s hard to be careful and fast at the same time.
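For the numerically inclined, the scaling is easy to check with a few lines of Python (a sketch using the circa-1995 value of Tennant’s constant quoted above):

```python
# A quick numerical check of Tennant's Law, At = kT * R^5, using the
# circa-1995 constant kT = 4.3 nm^-3 s^-1.

def areal_throughput(R_nm, kT=4.3):
    """Areal throughput in nm^2/s for a resolution R_nm (in nm)."""
    return kT * R_nm**5

# Halving the resolution costs a factor of 2^5 = 32 in areal throughput:
ratio = areal_throughput(100) / areal_throughput(50)
print(ratio)  # 32.0
```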

The implication for direct write is clear: the economics of writing small features is very bad. Granted, Tennant’s constant increases over time as our technology improves, but it has never increased nearly fast enough for high resolution direct-write lithography to make up for the R^5 deficit.

But does Tennant’s Law apply to optical lithography? Yes, and no. Unlike direct-write lithography, in optical lithography we write a massive number of pixels at once: parallel processing versus serial processing. That makes Tennant’s constant very large (and that’s a good thing), but is the scaling any different?

For a given level of technology, the number of pixels that can fit into an optical field of a projection lens is roughly constant. Thus, a lower-resolution lens with a large field size can be just as difficult to make as a higher-resolution lens with a small field size if the number of pixels in the lens field is the same. That would give an R^2 dependence to the areal throughput, just like for direct write (though, again, Tennant’s constant will be much larger for projection printing).

But is there a further R^3 dependence to printing small pixels for optical projection printing, just as in electron-beam and other direct-write technologies? Historically, the answer has been no. Thanks to significant effort by lithography tool companies (and considerable financial incentives as well), the highest resolution tools also tend to have the fastest stages, wafer handling, and alignment systems. And historically, light sources have been bright, so that resist sensitivity (especially for chemically amplified resists) has not been a fundamental limit to throughput.

But all that is changing as the lithography community looks to Extreme UV (EUV) lithography. EUV lithography has the highest resolution, but it is slow. Painfully slow, in fact. Our EUV sources are not bright, so resist sensitivity limits throughput. And resist sensitivity for EUV, as it turns out, is a function of resolution.

For some time now, researchers in the world of EUV lithography have been talking about the “RLS trade-off”: the unfortunate constraint that it is hard to get simultaneously high resolution (R), low line-edge roughness (L), and good resist sensitivity (S, the dose required to properly expose the resist). Based on scaling arguments and empirical evidence, Tom Wallow and coworkers have found that, for a given level of resist technology [2],

R^3 L^2 S = constant

Since throughput is limited by the available intensity from the EUV source, we find that, for a fixed amount of LER,

Throughput ~ 1/S ~ R^3

Finally, since the number of pixels in a lens field is fixed, the areal throughput will be

At ~ (lens field size) × R^3 ~ R^2 × R^3 ~ R^5

Thus, like direct-write lithographies, EUV lithography obeys Tennant’s law. This is bad news. This means that EUV will suffer from the same disastrous economics as direct-write lithography: shrinking the feature size by a factor of 2 produces a factor of 32 lower throughput.
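The whole scaling chain can be sketched in a few lines (my own illustration of the argument above, with everything normalized to a reference resolution R0):

```python
# From R^3 L^2 S = const at fixed LER (L), the required dose scales as
# S ~ 1/R^3; throughput (~1/S) goes as R^3; and the R^2 lens-field factor
# brings the areal throughput to R^5.

def dose_scale(R, R0=1.0):
    # S ~ R^-3 at fixed LER
    return (R0 / R)**3

def areal_throughput_scale(R, R0=1.0):
    # At ~ (1/S) * (field size) ~ R^3 * R^2 = R^5
    return (R / R0)**5

print(dose_scale(0.5))              # 8.0 -> half the resolution needs 8x the dose
print(areal_throughput_scale(0.5))  # 0.03125 -> 1/32 the areal throughput
```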

Ah, but the comparison is not quite fair. For projection lithography, Tennant’s constant is not only large, it increases rapidly over time. Tim Brunner first noted this in what I call Brunner’s Corollary [3]: over time, optical lithography tends to increase Tennant’s constant at a rate that more than makes up for the R^2 dependence of the lens field size. As a result, optical lithography actually increases areal throughput while simultaneously improving resolution for each new generation of technology. Roughly, it seems that Tennant’s constant has been inversely proportional to about R^2.5 as R shrank with each technology node.

But that was before EUV, and before the R^3 dependence of the RLS trade-off kicked in. At best, we might hope for an effective Tennant’s law over time that sees throughput go as R^2. This is still very bad. This means that for every technology node (when feature sizes shrink by 70%) we’ll need our source power to double just to keep the throughput constant. The only way out of this dilemma is to break the RLS “triangle of death” so that resolution can improve without more dose and worse LER.
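The arithmetic behind that claim is worth making explicit (a back-of-envelope sketch, assuming the At ~ R^2 effective scaling and the usual 0.7x shrink per node):

```python
# With an effective At ~ R^2 and a ~0.7x feature shrink per node, throughput
# falls to about half each node, so the source power needed to hold it
# constant must roughly double.

shrink = 0.7
throughput_factor = shrink**2             # ~0.49 of the previous throughput
power_multiplier = 1 / throughput_factor  # ~2x the source power per node
print(round(power_multiplier, 2))         # 2.04
```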

Is the RLS trade-off breakable? Can LER be lowered without using more dose? This is a topic receiving considerable attention and research effort today. We’ll have to stay tuned over the next few years to find out. But for all the risks involved with EUV lithography for semiconductor manufacturing, we can add one more: Tennant’s law.

[1] Donald M. Tennant, Chapter 4, “Limits of Conventional Lithography”, in Nanotechnology, Gregory Timp Ed., Springer (1999) p. 164.
[2] Thomas Wallow, et al., “Evaluation of EUV resist materials for use at the 32 nm half-pitch node”, Proc. SPIE 6921, 69211F (2008).
[3] T. A. Brunner, “Why optical lithography will live forever”, JVST B 21(6), p. 2632 (2003).

Tennant’s Law

It’s hard to make things small. It’s even harder to make things small cheaply.

I was recently re-reading Tim Brunner’s wonderful paper from 2003, “Why optical lithography will live forever” [1], when I was reminded of Tennant’s Law [2,3]. Don Tennant spent 27 years working in lithography-related fields at Bell Labs, and has been running the Cornell NanoScale Science and Technology Facility (CNF) for the last five years. In 1999 he plotted up an interesting trend for direct-write-like lithography technologies: There is a power-law relationship between areal throughput (the area of a wafer that can be printed per unit time) and the resolution that can be obtained. Putting resolution (R) in nm and areal throughput (At) in nm^2/s, his empirically observed relationship looks like this:

At = 4.3 R^5

Even though the proportionality constant (4.3) represents a snapshot of technology capability circa 1995, this is not a good trend. When cutting the resolution in half (at a given level of technology capability), the throughput decreases by a factor of 32. Yikes. That is not good for manufacturing.
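To see just how bad, here is a quick back-of-envelope calculation (my own illustrative numbers, not Tennant’s) of the time to direct-write one 300-mm wafer at the 1995 technology snapshot:

```python
import math

# What At = 4.3 * R^5 (nm^2/s) implies for writing a whole 300-mm wafer,
# at the circa-1995 snapshot of direct-write capability.

def areal_throughput(R_nm, kT=4.3):
    return kT * R_nm**5  # nm^2 per second

wafer_area = math.pi * (150e6)**2  # 300-mm wafer area in nm^2 (~7.1e16)

days_at_100nm = wafer_area / areal_throughput(100) / 86400
days_at_50nm = wafer_area / areal_throughput(50) / 86400
print(round(days_at_100nm), round(days_at_50nm))  # ~19 days vs ~609 days
```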

What’s behind Tennant’s Law, and is there any way around it? The first and most obvious problem with direct-write lithography is the pixel problem. Defining one pixel element as the resolution squared, a constant rate of writing pixels will lead to a throughput that goes as R^2. In this scenario, we always get an areal throughput hit when improving resolution just because we are increasing the number of pixels we have to write. Dramatic increases in pixel writing speed must accompany resolution improvement just to keep the throughput constant.

But Tennant’s Law shows us that we don’t keep the pixel writing rate constant. In fact, the pixel throughput (At/R^2) goes as R^3. In other words, writing a small pixel takes much longer than writing a big pixel. Why? While the answer depends on the specific direct-write technology, there are two general reasons. First, the sensitivity of the photoresist goes down as the resolution improves. For electron-beam lithography, higher resolution comes from using a higher energy (at least to a point), since higher-energy electrons exhibit less forward scattering, and thus less blurring within the resist. But higher-energy electrons also transfer less energy to the resist, thus lowering resist sensitivity. The relationship is fundamental: scattering, the mechanism that allows an electron to impart energy to the photoresist, also causes a blurring of the image and a loss of resolution. Thus, reducing the blurring to improve resolution necessarily results in lower sensitivity and thus lower throughput.

(As an aside, higher electron energy results in greater backscattering, so there is a limit to how far resolution can be improved by going to higher energy.)

Chemically amplified (CA) resists have their own throughput versus resolution trade-off. CA resists can be made more sensitive by increasing the amount of baking done after exposure. But this necessarily results in a longer diffusion length of the reactive species (the acid generated by exposure). The greater sensitivity comes from one acid (the result of exposure) diffusing around and finding multiple polymer sites to react with, thus “amplifying” the effects of exposure and improving sensitivity. But increased diffusion worsens resolution – the diffusion length must be kept smaller than the feature size in order to form a feature.

Charged particle beam systems have another throughput/resolution problem: like charges repel. Cranking up the current to get more electrons to the resist faster (that is, increasing the electron flux) crowds the electrons together, increasing the amount of electron-electron repulsion and blurring the resulting image. These space-charge effects ultimately doomed the otherwise intriguing SCALPEL projection e-beam lithography approach [4].

The second reason that smaller pixels require more write time has to do with the greater precision required when writing a small pixel. Since lithography control requirements scale as the feature size (a typical specification for linewidth control is ±10%), one can’t simply write a smaller pixel with the same level of care as a larger one. And it’s hard to be careful and fast at the same time.

One reason why smaller pixels are harder to control is the stochastic effects of exposure: as you decrease the number of electrons (or photons) per pixel, the statistical uncertainty in the number of electrons or photons actually used goes up. The uncertainty produces linewidth errors, most readily observed as linewidth roughness (LWR). To combat the growing uncertainty in smaller pixels, a higher dose is required.
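A toy Poisson-statistics calculation makes the point (my illustration of shot noise, with an arbitrary photons-per-unit-dose normalization, not numbers from any real tool):

```python
import math

# At fixed dose, the expected photon (or electron) count N in an R x R pixel
# scales as R^2, and the relative uncertainty scales as 1/sqrt(N).

def rel_uncertainty(dose, R, photons_per_unit=1.0):
    N = dose * photons_per_unit * R**2  # expected quanta in the pixel
    return 1.0 / math.sqrt(N)

base = rel_uncertainty(dose=10, R=1.0)
print(rel_uncertainty(dose=10, R=0.5) / base)  # 2.0 -> half the pixel, twice the noise
print(rel_uncertainty(dose=40, R=0.5) / base)  # 1.0 -> 4x the dose restores it
```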

Other throughput limiters can also come into play for direct-write lithography, such as the data rate (one must be able to supply the information as to which pixels are on or off at a rate at least as fast as the pixel writing rate), or stage motion speed. But assuming that these limiters can be swept away with good engineering, Tennant’s Law still leaves us with two important dilemmas: as we improve resolution we are forced to write more pixels, and the time to write each pixel increases.

For proponents of direct-write lithography, the solution to its throughput problems lies with multiple beams. Setting aside the immense engineering challenges involved with controlling hundreds or thousands of beams to a manufacturing level of precision and reliability, does a multiple-beam approach really get us around Tennant’s Law? Not easily. We still have the same two problems. Every IC technology node increases the number of pixels that need to be written by a factor of 2 over the previous node, necessitating a machine with at least twice the number of beams. But since each smaller pixel takes longer to write, the real increase in the number of beams is likely to be much larger (more likely a factor of 4 rather than 2). Even if the economics of multi-beam lithography can be made to work for one technology node, it will look very bad for the next technology node. In other words, writing one pixel at a time does not scale well, even when using multiple beams.
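A rough sketch of that beam-count arithmetic (the ~2x per-pixel time growth per node is my assumption for illustration, consistent with the “factor of 4 rather than 2” estimate above):

```python
# Per node the features shrink ~0.7x, so pixel count doubles (1/0.7^2 ~ 2).
# If each smaller pixel also takes ~2x longer to write (assumed here), the
# number of beams needed to hold wafer throughput constant grows ~4x per node.

shrink = 0.7
pixel_factor = 1 / shrink**2  # ~2x more pixels per node
time_factor = 1 / shrink**2   # ~2x longer per pixel (assumed)
beam_factor = pixel_factor * time_factor
print(round(beam_factor, 1))  # 4.2
```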

In a future post, I’ll talk about why Tennant’s Law has not been a factor in optical lithography – until now.

[1] T. A. Brunner, “Why optical lithography will live forever”, JVST B 21(6), p. 2632 (2003).
[2] Donald M. Tennant, Chapter 4, “Limits of Conventional Lithography”, in Nanotechnology, Gregory Timp Ed., Springer (1999) p. 164.
[3] Not to be confused with Roy Tennant’s Law of Library Science: “Only librarians like to search, everyone else likes to find.”
[4] J.A. Liddle, et al., “Space-charge effects in projection electron-beam lithography: Results from the SCALPEL proof-of-lithography system”, JVST B 19(2), p. 476 (2001).

A view from the top (20)

It is an article of faith among semiconductor industry watchers that the last 20 years have seen considerable consolidation among semiconductor makers, with further consolidation all but inevitable. Of course, we can all point to mergers (TI and National being the latest) and players exiting from the market (NEC was the #1 chipmaker in the world in 1991, but now is out of the business). But does the data support this view of rampant consolidation?

I’ve been looking over 24 years of annual top 20 semiconductor company revenue data compiled by Gartner Dataquest (1987 – 1999) and iSuppli (2000 – 2010), and the results show a more nuanced picture. As I noted in my last post on this topic, foundries are excluded from this accounting – their revenue is attributed to the companies placing the orders. Thus, this is a semiconductor product-based top-20 list, not a semiconductor fab-based top-20 list. With that in mind, let’s look at the trends.

Consider first the fraction of the total semiconductor market controlled by the top 20 semiconductor companies. The trendline shows a 15% drop in market share over 24 years for the top 20, or about a 0.7% decline on average each year. In other words, the rest of the semiconductor companies (those not in the top 20) saw their market share grow dramatically, from 23% to 38% or so.

[Chart: Semiconductor Top 20 Market Share]

Likewise, the top 10 semiconductor companies saw their market share drop by ten points, from about 56% to 46% (or about 0.45% per year). The top five companies, on the other hand, kept about a constant share of 1/3 of the market since 1987. The trendline has a slope not significantly different from zero (-0.1% per year).

[Chart: Semiconductor Top 5 Market Share]

But it’s the top two semiconductor makers that show the most interesting trend. The top 2 have seen a six-point rise in their market share, to 22% today, for an increase of about 0.3% per year. The top three makers have seen a more modest 0.15% increase in market share per year since 1987. Thus, consolidation of market share has only come at the very top of the market, the top 2 to be specific. For the rest of the industry, there has been a spreading out of the market among more players. Those top 2 players are now, of course, Intel and Samsung. But in 1987 they were NEC and Toshiba (Intel was #10 then, and Samsung wasn’t on the list).

[Chart: Semiconductor Top 2 Market Share]

So is the megatrend of semiconductor industry consolidation a myth? Yes and no. From a product perspective, the data is clear. The top two companies have grown in dominance, but for the remaining 80% of the market or so revenue is being spread over a wider array of companies over time. Foundries can be given some credit for the increased democratization of the market, but the trends were in place before foundries even came into existence. In fact, it is more accurate to say that foundries are a result rather than a cause of this democratization. It is the nature of the semiconductor product itself which has driven this increase in the long tail of the distribution of companies.

While there have always been a few blockbuster product categories (memory and microprocessors) where size matters, the vast majority of semiconductor revenue comes from niche (or at least small market share) products. Big companies don’t excel at making lots of niche products. Thus, small to medium-sized companies who stay close to their customers are able to compete well against their larger rivals. It is likely that this trend will continue so long as Moore’s Law continues.

Moore’s Law keeps the few big players still able to invest in new fabs quite busy, and they need big market categories to justify their big investments. There has been considerable consolidation in the industry if you consider fabs rather than products, since there are now only about five companies that are likely to stay at the front of Moore’s Law over the next few years. And these top five manufacturers have seen growth in their share of fab output. But I doubt that a smaller number of fabs competing at the very high-end of the market will somehow reverse the trend of dispersion for the other 80% of the market. That is, until Moore’s Law ends. Then, these big companies with their big fabs are likely to turn their attention to markets that once seemed too diffuse to worry about. What happens then, in a post-Moore’s Law world, is anyone’s guess.

The top 20 ain’t what it used to be

Looking back on data of the annual top 20 semiconductor companies since 1987, it’s amazing how much has changed. In my last post on this topic, I looked at all the companies that went bankrupt, spun-out, or merged their way into or out of the top 20 list. Change is definitely a constant in this field. Now, let’s look at the makeup of the 2010 list of top semiconductor companies. Here is the list, as generated by iSuppli.

1 Intel Corporation
2 Samsung Electronics
3 Toshiba Semiconductor
4 Texas Instruments
5 Renesas Electronics
6 Hynix
7 STMicroelectronics
8 Micron Technology
9 Qualcomm
10 Broadcom
11 Elpida Memory
12 Advanced Micro Devices
13 Infineon Technologies
14 Sony
15 Panasonic Corporation
16 Freescale Semiconductor
17 NXP
18 Marvell Technology Group
19 MediaTek
20 NVIDIA

It’s important to note that foundries are excluded from this accounting – their revenue is attributed to the companies placing the orders. Thus, this is a semiconductor product-based top-20 list, not a semiconductor maker-based top-20 list.

And that distinction is obvious when looking at the make-up of the 2010 top-20. Six of the top 20 companies are fabless. Another seven are “fab-lite”, meaning they have stopped investing in new fabs or leading-edge manufacturing. That leaves just seven leading-edge semiconductor manufacturers in the top 20. Of those, four make mostly memory (80% of Samsung’s revenue came from memory), two make mostly logic, and one (Toshiba) makes a fair amount of both.

As a point of reference, if TSMC’s revenue were attributed to TSMC rather than their customers, they would be in fourth place, just barely behind Toshiba. The next two largest foundries, UMC and GlobalFoundries, would find themselves near the bottom of the top 20.

So, we have seven semiconductor manufacturers and three foundries that claim to still want to invest in leading-edge manufacturing capacity. That’s a far cry from just 10 years ago, when all 20 of the top 20 semiconductor companies were committed to building new leading-edge fabs. And even this list of 10 companies can’t really afford to play at the bleeding edge. Only five of them (Intel, Samsung, Toshiba, TSMC, and Hynix) have over $10B/year in semiconductor revenue, probably the minimum needed to build that next $5B mega fab. Add EUV and 450mm wafers into the mix, and you can see that there will be very few players at this ultra-high end of manufacturing.

It is conventional wisdom that the last decade has been one of extreme consolidation in the semiconductor business. Next, I’ll look at the numbers to see how well that conventional wisdom holds up.

What to do with an old lithography tool?

So you’ve got an old lithography tool hanging around. It doesn’t have the resolution (or any other spec) needed for production of pretty much anything that anyone wants to make. What can you do with it?

One option is to sell it to a Hollywood prop house. Apparently, that is what someone did with an old Cobilt mask aligner (at least, I think it is a Cobilt). It has probably shown up in several movies, but the one I saw it in was Silent Running, a good but not great sci-fi movie from 1972. Here are some shots from the movie.

Cobilt Aligner in Silent Running
Bruce Dern as Freeman Lowell limping past the mask aligner after murdering his crewmates.

Cobilt Aligner in Silent Running

Cobilt Aligner in Silent Running
Lowell using the mask aligner to reprogram the company droids to answer to him.

You can’t keep a good lithography tool down, not if you have a little imagination.

Is EUV the SST of Lithography?

Analogies with Moore’s Law abound. Virtually any trend looks linear on a log-linear plot if the time period is short enough. Some people hopefully compare their industry’s recent history to Moore’s Law, wishfully predicting future success with the air of inevitability usually attached to it. Others look to some past trend in the hopes of understanding the future of Moore’s Law. A common analogy of the latter sort is the trend of airplane speed over the last century.

Airspeed Trend

Plotting the cruising speed of new planes against their first year of commercial use, the trend from the 1910s to the 1950s was linear on a log scale, just like a Moore’s Law plot. But then something different happened. As airspeed approached the speed of sound, the trend leveled off – a physical limit changed the economics of air travel. The equivalent of Moore’s Law for air travel had ended.

For me, the interesting data point is the Concorde supersonic transport (SST). First flown commercially in 1976, the Mach 2 jet was perfectly in line with the historical log-speed trend of the first 50 years of the industry. And the SST was a technical success – it did everything that was expected of it. Except, of course, make money. The economic limit had been reached, but that didn’t stop many bright people from insisting that the trend must continue, spending billions to make it so. But technological invention couldn’t change the economic picture, and supersonic transportation never caught on.

So here goes my analogy. I think extreme ultraviolet (EUV) will be the SST of lithography. I have little doubt that the technology can be made to work. If it fails (I hope it won’t, but I think it will), the failure will be economic. Like the SST, EUV lithography will never be economical to operate in a mass (manufacturing) market. We can do it, but that doesn’t mean we should.

Of course, this analogy is imperfect, as all such analogies are. Air travel went through just three doublings of speed in 50 years, as opposed to the 36 doublings of transistor count per chip in the last 50 years of semiconductor manufacturing. And the economics of the industries are hardly the same. Still, the analogy has enough weight to make one think. We’ll know soon enough – EUV lithography will likely succeed or fail in the next two years.
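The doubling counts in that comparison are just base-2 logarithms of the overall growth ratios. A quick sketch, using rough illustrative figures of my own (the specific speeds and transistor counts are assumptions for the arithmetic, not numbers from this post):

```python
import math

def doublings(initial, final):
    """Number of doublings needed to grow from initial to final."""
    return math.log2(final / initial)

# Airliner cruise speed: assume ~75 mph (1910s) to ~600 mph (jet age)
print(round(doublings(75, 600), 1))    # 3.0 doublings in ~50 years

# Transistors per chip: assume ~1 to ~7e10 over 50 years
print(round(doublings(1, 7e10), 1))    # ~36.0 doublings
```

Three doublings is only a factor of 8; thirty-six doublings is a factor of roughly 70 billion, which is why the economics of the two industries diverge so sharply.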

As an aside, the first time I heard someone mention the analogy between airspeed and transistor trends was in the early 1990s, when Richard Freeman of AT&T gave a talk. The subject of his presentation? Soft x-ray lithography, what we now call EUV.

Frits Zernike, Jr., 1931 – 2011

Lithography lost one of its own on July 12 with the death of Frits Zernike Jr. from Parkinson’s disease. Here is his obituary from the New York Times:

Born and educated in Groningen, the Netherlands. A physicist with Perkin-Elmer Corp., Silicon Valley Group and Carl Zeiss, and first manager for Dept. of Energy’s Extreme Ultraviolet Lithography Program. Survived by his wife of 49 years, Barbara Backus Zernike, children Frits III, Harry, and Kate, daughter- and son-in-law Jennifer Wu and Jonathan Schwartz, and three grandchildren: Frits and Nicolaas Schwartz and Anders Zernike. Memorial service will be 3pm Thursday, July 28, at Essex Yacht Club, Novelty Lane, Essex, CT. Donations in his memory may be made to Dance for Parkinson’s, c/o NMS, 100 Audubon St, New Haven, CT 06510, or Community Music School, P.O. Box 387, Centerbrook, CT 06409.

Here is an excerpt from a post I made to this blog on February 27, 2009 concerning Frits:

“It was seven years ago that SPIE approached me with the idea of creating a major SPIE award in microlithography. I agreed to head up the effort, and gathered together a committee of other lithographers to establish the award process. Someone on the committee suggested naming the award after Frits Zernike, for three reasons. First, no major optical award had been named in his honor, even though the scientific contributions of this Nobel prize winner are legion. Second, the name has high recognition in the optical lithography community due to the ubiquitous use of the Zernike polynomial for describing lens aberrations. The third reason is more personal – Zernike’s son, Frits Zernike Jr., worked for many years in the field of lithography at Perkin-Elmer and later SVG Lithography before retiring. Some of us on the committee knew him, and when contacted he was very supportive of an award named for his father.”