All posts by Chris

Bacus 2012 – I Want My Mask for Free

Alas, I was not able to attend the Bacus maskmaking conference last week in Monterey, California. Although smaller now than in its heyday, it is still a fun conference to go to. But thanks to some talented and enterprising Baccanalians, a little of the flavor of the conference is available on YouTube:

http://www.youtube.com/watch?v=2HgJc9UMvd0&feature=share

I don’t know everyone who made this, though I do recognize Mark Mason (the Aggie), and that must certainly be Tony Vacca on the drums.

For a little history on Bacus entertainment, look here.

Deriving the Tesla Roadster

The Tesla Roadster is just about the coolest car on the planet. Starting with the body of a Lotus Elise, Tesla adds an electric engine and a bunch of laptop batteries to create fabulous style, amazing speed, and a perfectly green image. What more could a car buff want?

OK, one could hope for a slightly lower price. But I have a separate problem. I already own a Lotus Elise (since 2005, the first year it was available in the US), and I am attached to it. I still need a family car, since the kids aren’t old enough to ride in the Elise, and I am still a believer in electric vehicles as the future of personal transportation (with that future starting now).

As such, circumstances have created a workable compromise. Last week I bought a Nissan Leaf – an all-electric vehicle. I love it. And with the purchase I was able to derive the following equation:

1 Lotus Elise + 1 Nissan Leaf + $20K = 1 Tesla Roadster

The numbers don’t lie.

Postscript to 450-mm wafers

After posting on Why 450-mm Wafers and Why the Big Players Like 450-mm Wafers, I received a few comments from friends in the equipment supplier community talking about the effect of wafer-size transitions on the suppliers of process and metrology tools for semiconductor manufacturing. So, based on their inputs and further reflection, here are a few more thoughts on 450-mm wafers.

It is expensive to develop equipment to process larger wafers. If an equipment supplier spends a boatload of money developing new equipment, they want to sell that new equipment for a lot of money in order to recoup their investment. But their customers, the chip makers, don’t want the equipment prices to rise too much, or else the cost advantage of the larger wafer size will disappear. The goal should be a win-win sharing of the benefits of a larger wafer: the chip makers get a lower manufacturing cost per chip and the equipment makers get a higher margin on their equipment, thus paying off their R&D and making more money after that.

There is a general feeling in the industry that the transition to 300-mm wafers didn’t work out equitably: the equipment suppliers made all the investments, and the chip makers got all the benefits. And while I’m sure this version of the story is somewhat slanted, still we have seen most equipment suppliers dragging their feet on 450-mm tool development. They want the chip companies to pay up-front for development. Chip companies in turn want to get governments to foot the bill (why should a highly profitable company like Intel pay the costs needed to ensure future profits if they can get the state of New York to pay instead?). And so it has begun: the Global 450 Consortium funding tool R&D, and Intel, TSMC, and Samsung paying litho supplier ASML billions of dollars directly for 450-mm tool development.

How will a transition to 450-mm wafers affect the equipment suppliers? One effect is similar to that experienced by the chip makers: the small guys won’t survive. Only the bigger players can afford the development costs for 450-mm wafer size tools. But there has traditionally been a second effect: even the big players can’t afford the development costs of new process equipment on multiple wafer sizes.

When the industry moved to 300-mm wafers, new process tools were developed for 300-mm wafers only. Chip companies that stuck to 200-mm wafers couldn’t get the latest and greatest tools for the smaller wafer size. They were stuck in the past. Not only did they have a cost disadvantage compared to 300-mm fabs, they had a technology disadvantage as well. Staying up to speed on Moore’s Law required moving to 300-mm wafers.

Will the same thing happen at 450 mm? Maybe, but I’m not convinced that it is inevitable. As I said before, the move to 450-mm wafers will not likely be the slam-dunk cost savings that many people hope. If the cost advantage is only 10%, I suspect many companies will choose to stick with 300-mm wafers. But will the next generation of process tools be available at the smaller wafer size? If new 300-mm wafer fabs are being built, you can bet that equipment suppliers will scramble to provide them with tools.

All in all, I think the move to 450-mm wafers will be a mess. The timing is problematic, the economics are problematic, and the resemblance of the future to the past is not likely to be strong. Somehow, though, we’ll figure something out. We always do.

Predicting the future is hard

I can’t say that I am good at predicting the future. Then again, I don’t try to make a living off of it. Ray Kurzweil is a futurist who regularly talks about how great technology will be in the 2020s by extrapolating trends like Moore’s Law (and, in fact, accelerating them). Will his predictions come true? Actually, we can make a prediction about that based on his past performance.

Here he is, in a 2005 TED talk:

“By 2010 computers will disappear. They’ll be so small, they’ll be embedded in our clothing, in our environment. Images will be written directly to our retina, providing full-immersion virtual reality, augmented real reality. We’ll be interacting with virtual personalities.”

I don’t know about you, but my computer has yet to disappear. And thankfully, I still interact with non-virtual personalities.

He was way off with a prediction only five years out. I suspect he will be even further off with his longer-range predictions. Still, I bet if you ask Ray Kurzweil he’ll tell you he was dead on with this prediction. He always does:

http://www.forbes.com/sites/alexknapp/2012/03/20/ray-kurzweils-predictions-for-2009-were-mostly-inaccurate/

http://www.forbes.com/sites/alexknapp/2012/03/21/ray-kurzweil-defends-his-2009-predictions/

Why the Big Players Like 450-mm Wafers

The reason semiconductor manufacturers like the idea of 450-mm wafers is easy to understand: bigger wafers should lower the per-chip cost of manufacturing. But as I mentioned in my last post, this per-chip cost advantage doesn’t apply to lithography. Each time a wafer size is increased, only the non-litho (per-chip) costs go down, and so lithography costs take up a bigger portion of the overall costs. A corollary to this economic reality is that as lithography costs go up as a fraction of the total costs, the benefits of a larger wafer size go down. Past wafer size transitions have netted a 30% manufacturing cost reduction. The transition to 450-mm wafers will give at best a 20% cost reduction, and possibly only a 10% reduction.

Of course, these numbers are projections, and all projections are based on assumptions. It is possible to make more optimistic assumptions than I have, and that is probably what Intel, TSMC and the other big players are doing when they heavily promote 450-mm wafers. But why are the big guys so optimistic about 450-mm wafers? And why now?

As I briefly mentioned in my last post, for the switch to larger wafer sizes to be economically feasible two things must happen. First, the switch must enable a lower manufacturing cost per chip. The big players are hoping for a 30% cost reduction, but I am predicting a 10 – 20% benefit. Second, there must be sufficient demand for the chips being produced to justify a higher volume factory. A 450-mm fab will have at least double the output (in terms of chips) of a 300-mm fab. Thus, the demand for those chips must at least double to justify the building of a 450-mm fab. That’s a huge volume of chips, since 300-mm fabs are already exceedingly high-volume.

So an important effect of each wafer transition is that low-volume manufacturers can no longer compete. A 30% cost disadvantage is hard to overcome, and without the volume demand a new fab at the larger wafer size isn’t justified. The result? A successful wafer size transition is accompanied by a host of consolidations and chip companies going fabless (or fab-lite). This has happened again and again over the years. Only the biggest players survive, and the survivors get bigger.

Today, we have Intel, Samsung, Toshiba and TSMC at the top of the chip-making pyramid. But UMC, GlobalFoundries, Hynix, and Micron remain competitive irritants. What to do? A successful transition to 450-mm wafers will likely solve the problem for the big players. If 450-mm wafers result in a 20 – 30% cost advantage over 300-mm wafers, then any standard-process chip in a cost competitive space will have to be made in a 450-mm fab. But only a few of these $10B fabs will have to be built to supply that demand. And those fabs will be built by the biggest players, leaving the second tier manufacturers out of luck, and possibly out of business.

So why shouldn’t Intel, Samsung, and TSMC be bullish on 450-mm? If it works, it will mean that their dominance in the semiconductor world will be complete (maybe even pushing Toshiba out of the picture). And if EUV succeeds in keeping litho costs down, this scenario is all the more likely.

But personally I don’t think EUV will be successful at putting a lid on litho cost. As a result, I think the cost advantage of 450-mm will be closer to 10% than the 20 – 30% hoped for by the big guys. And while 10% may still be worth it for the highest-volume players, it won’t be enough to put the 300-mm fab world out of business.

That leaves one more ugly point to consider. If a transition to 450-mm wafers gives a per-chip cost reduction that is not sufficiently large to counter the rising costs of litho, then the per-chip costs overall might be higher (and maybe a lot higher) for new technology nodes. What will happen to Moore’s Law if moving to the next node no longer decreases the cost of a transistor?

We live in interesting times, and getting more interesting each day.

Why 450-mm wafers?

Why is 450-mm development so important to Intel (and Samsung and TSMC)?

A few years ago, Intel and TSMC began heavily promoting the need for a transition from the current standard silicon wafer size, 300 mm, to the new 450-mm wafers. While many have worked on 450-mm standards and technology for years, it is only recently that the larger wafer has received enough attention and support (not to mention government funding) to believe that it may actually become real. While there has been much talk about the need for a larger wafer, I’d like to put my spin on the whole debate.

First, a bit of history. Silicon wafer sizes have been growing gradually and steadily for the last 50 years, from half-inch and one-inch silicon to today’s 300-mm diameter wafers. The historical reasons for this wafer size growth were based on three related trends: growing chip size, growing demand for chips, and the greater chip throughput (and thus lower chip cost) that the larger wafer sizes enabled. And while chip sizes stopped increasing about 15 years ago, the other two factors have remained compelling. The last two wafer size transitions (6 inch to 8 inch/200 mm, and 200 mm to 300 mm) each resulted in about a 30% reduction in the cost per area of silicon (and thus cost per chip). And since our industry is enamored with the thought that the future will look like the past, we are hoping for a repeat performance with the transition to 450-mm wafers.

But a closer look at this history, and what we can expect from the future, reveals a more complicated picture.

First, how does increasing wafer size lower the cost per unit area of silicon? Consider one process step as an example – etch. Maximum throughput of an etch tool is governed by two basic factors: wafer load/unload time and etch time. With good engineering there is little reason for either of these times to change as the wafer size increases. Thus, wafer throughput remains constant as a function of wafer size, so that chip throughput improves as the wafer size increases. But “good engineering” is not free, and it takes work to keep the etch uniformity the same for a larger wafer. The larger etch tools also cost more money to make. But if the tool cost does not increase as fast as the wafer area, the result is a lower cost per chip. This is the goal, and the reason why we pursue larger wafer sizes.

As a simplified example, consider a wafer diameter increase of 1.5X (say, from 200 mm to 300 mm). The wafer area (and thus the approximate number of chips) increases by 2.25. If the cost of the etcher, the amount of fab floor space, and the per-wafer cost of process chemicals all increase by 30% at 300 mm, the cost per chip will change by 1.3/2.25 = 0.58. Thus, the etch cost per chip will be 42% lower for 300-mm wafers compared to 200-mm wafers.
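The arithmetic in this example can be sketched in a few lines (the 30% cost increase is the illustrative assumption from the text, not a measured tool price):

```python
# Illustrative per-chip cost scaling for a 200-mm -> 300-mm transition,
# using the assumed numbers from the example above.

diameter_ratio = 300 / 200            # 1.5x larger wafer diameter
area_ratio = diameter_ratio ** 2      # 2.25x more area, so ~2.25x more chips
cost_ratio = 1.30                     # assume tool, floor space, and chemicals cost 30% more

per_chip_cost_ratio = cost_ratio / area_ratio
print(f"per-chip etch cost ratio: {per_chip_cost_ratio:.2f}")     # 0.58
print(f"per-chip cost reduction: {1 - per_chip_cost_ratio:.0%}")  # 42%
```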

While many process steps have the same fundamental scaling as etch – wafer throughput is almost independent of wafer size – some process steps do not. In particular, lithography does not scale this way. Lithography field size (the area of the wafer exposed at one time) has been the same for nearly 20 years (since the era of step-and-scan), and there is almost zero likelihood that it will increase in the near future. Further, the exposure time for a point on the wafer for most litho processes is limited by the speed with which the tool can step and scan the wafer (since the light source provides more than enough power).

Like etch, the total litho process time is the wafer load/unload time plus the exposure time. The load time can be kept constant as a function of wafer size, but the exposure time increases as the wafer size increases. In fact, it takes great effort to keep the scanning and stepping speed from slowing down for a larger wafer due to the greater wafer and wafer stage mass that must be moved. And since wafer load/unload time is a very small fraction of the total process time, the result for lithography is a near-constant wafer-area throughput (rather than the constant wafer throughput for etch) as wafer size is changed.

One important but frequently overlooked consequence of litho throughput scaling is that each change in wafer size results in an increase in the fraction of the wafer costs caused by lithography. In the days of 6-inch wafers, lithography represented roughly 20 – 25% of the cost of making a chip. The transition to 200-mm (8-inch) wafers lowered the (per-chip) cost of all process steps except lithography. As a result, the overall per-chip processing costs went down by about 25 – 30%, but the per-chip lithography costs remained constant and thus became 30 – 35% of the cost of making a chip.

The transition to 200-mm wafers increased the wafer area by 1.78. But since lithography accounted for only 25% of the chip cost at the smaller 6-inch wafer size, that area improvement affected 75% of the chip cost and gave a nice 25 – 30% drop in overall cost. The transition to 300-mm wafers gave a bigger 2.25X area advantage. However, that advantage could only be applied to the 65% of the costs that were non-litho. The result was again a 30% reduction in overall per-chip processing costs. But after the transition, with 300-mm wafers, lithography accounted for about 50% of the chip-making cost.

Every time wafer size increases, the importance of lithography to the overall cost of making a chip grows.

And there lies the big problem with the next wafer size transition. Each wafer size increase affects only the non-litho costs, but those non-litho costs are becoming a smaller fraction of the total because of wafer size increases. Even if we can achieve the same cost savings for the non-litho steps in the 300/450 transition as we did for the 200/300 transition, its overall impact will be less. Instead of the hoped-for 30% reduction in per-chip costs, we are likely to see only a 20% drop in costs, at best.

So we must set our sights lower: past wafer size transitions gave us a 30% cost advantage, but 450-mm wafers will only give us a 20% cost benefit over 300-mm wafers. Is that good enough? It might be, if all goes well. But the analysis above applies to a world that is quickly slipping away – the world of single-patterning lithography. If 450-mm wafer tools were here today, maybe this 20% cost savings could be had. But shrinking feature sizes are requiring the use of expensive double-patterning techniques, and as a result lithography costs are growing. They are growing on a per-chip basis, and as a fraction of the total costs. And as lithography costs go up, the benefits of a larger wafer size go down.

Consider a potential “worst-case” scenario: at the time of a transition to 450-mm wafers, lithography accounts for 75% of the cost of making a chip. Let’s also assume that switching to 450-mm wafers does not change the per-chip litho costs, but lowers the rest of the costs by 40%. The result? An overall 10% drop in the per-chip cost. Is the investment and effort involved in 450-mm development worth it for a 10% drop in manufacturing costs? And is that cost decrease enough to counter rising litho costs and keep Moore’s Law alive?
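The worst-case arithmetic above, like the transition numbers in the preceding paragraphs, comes from one simple model: only the non-litho share of the per-chip cost scales down with wafer size. A sketch of that model, using the illustrative fractions from the text (not precise industry data):

```python
def per_chip_cost_after_transition(litho_fraction, non_litho_scaling):
    """New per-chip cost (old cost normalized to 1.0) when only the
    non-litho share of the cost benefits from the larger wafer."""
    return litho_fraction + (1 - litho_fraction) * non_litho_scaling

# Litho at ~50% of cost (300-mm era); assume non-litho per-chip costs drop ~44%
new_cost = per_chip_cost_after_transition(0.50, 0.56)
print(f"450-mm cost with litho at 50%: {new_cost:.2f}")   # 0.78, a ~22% drop

# Worst case from the text: litho at 75%, non-litho per-chip costs drop 40%
worst = per_chip_cost_after_transition(0.75, 0.60)
print(f"450-mm cost with litho at 75%: {worst:.2f}")      # 0.90, only a 10% drop
```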

Maybe my worst-case scenario is too pessimistic. In five or six years, when a complete 450-mm tool set might be ready, what will lithography be like? In one scenario, we’ll be doing double patterning with EUV lithography. Does anyone really believe that this will cost the same as single-patterning 193-immersion? I don’t. And what if 193-immersion quadruple patterning is being used instead? Again, the only reasonable assumption will be that lithography accounts for much more than 50% of the cost of chip production.

So what can we conclude? A transition to 450-mm wafers, if all goes perfectly (and that’s a big if), will give us less than 20% cost improvement, and possibly as low as 10%. Still, the big guys (Intel, TSMC, IBM, etc.) keep saying that 450-mm wafers will deliver 30% cost improvements. Why? Next time, I’ll give my armchair-quarterback analysis as to what the big guys are up to.

Semicon West Lithography Report

OK, I have to admit this right off: I didn’t go to Semicon West (held two weeks ago in San Francisco). I try never to go to Semicon West (I’ve been twice in the last 30 years, both times against my will). Why should I go? To listen to the latest marketing messages and company spin? To see a few technical talks that are way too light on the technical, but still full of talk? I don’t need to walk the cavernous Moscone Center to get that – everybody plasters the Web with this stuff on a regular basis. Thanks, but I think I’ll stay home.

This year was a perfect case in point. The only real news from Semicon was in the news – Intel’s announced investment in ASML. Yes, it would have been fun to sit in a San Francisco bar each evening and dissect the press releases and develop conspiracy theories. But even that is not really necessary. I’m here to give you my take on what the Intel investment means – and you don’t even have to buy me a beer to get it. (Though if you like this post, please feel free to buy me one the next time you see me.)

Intel’s investment in ASML has two parts – related, but separate. First, Intel is spending $2.1B to buy 10% of ASML, with an option to buy another 5%. ASML will use the money to buy back the same number of its shares, so there will be no stock dilution (a so-called synthetic buyback). That also means ASML will be getting nothing (no money, I mean) from this part of the deal. ASML is also offering similar deals to Samsung and TSMC, up to 25% ownership in the company. So what does this part of the deal mean? Intel and ASML made it clear that Intel gets no voting rights and won’t get early access to ASML technology or tools. Of course, they had to say that to avoid anti-trust litigation. So does the Intel investment help anyone?

There are three reasons why the Intel investment in ASML makes sense. First, it confirms the obvious: the success or failure of ASML will be mirrored as success or failure at Intel. Lest anyone doubt it, Intel needs Moore’s Law scaling to continue its growth and profitability. Lithography is the critical technology to make that happen, and ASML is the critical company to make lithography happen. Second, even without a place on the board, Intel’s ownership stake will add financial stability to ASML, whose stock price could easily drop dramatically if its EUV program were to flirt with failure. Since ASML’s importance to the industry goes far beyond its EUV program, keeping ASML developing and manufacturing lithography tools is critical.

But the third reason the investment makes sense is that the stock purchase is coupled with a $1B Intel investment in ASML R&D. This $1B infusion is what the whole deal is about, and the investment has one purpose: to speed 450-mm tool development at ASML. For several years now, as talk of 450-mm wafer sizes has heated up to what appears to be a critical mass, ASML has repeatedly said that it can’t do EUV and 450-mm development at the same time. After EUV has succeeded, then ASML will commit to 450-mm tool development. But since the day of reckoning for EUV continues to push out (possibly to 2016 or later), that means lithography, representing 50% of the cost of making a chip, won’t be 450-mm ready anywhere near in time to meet the (overly optimistic) timetables of the big 450-mm proponents (Intel, Samsung, and TSMC).

So here comes the investment from Intel. While the press release mentioned the importance of both EUV and 450-mm R&D, the only project mentioned for funding was 450-mm tool development. And to be clear, this is not only, or even mostly, EUV 450-mm development. A working 450-mm fab will need 193-immersion tools, 193 dry tools, and maybe 248-nm tools as well, all running at the 450-mm wafer size. If EUV works, a fab will need 450-mm EUV tools as well, but this is the only part of the lithography tool set that is optional for a 450-mm fab. So, in my opinion, the Intel investment is all about the 450-mm wafer size, and has essentially nothing to do with EUV lithography.

Why is 450-mm development so important to Intel (and Samsung and TSMC)? My answer to that question next time.

Douglas S. Goodman, 1947 – 2012

In pursuing a career in optical lithography, I’ve learned a lot about optics. When I graduated from college as an engineer I had the typical scant background in imaging, and I found the topic of partial coherence particularly opaque. Yes, all of the equations were in Born and Wolf, but that doesn’t mean I could understand them. That’s when I first discovered Doug Goodman, then working at IBM. He had developed a 2D optical imaging simulator and his papers on partial coherence approached the topic in a novel and enlightening way. I still had to read several other treatments before the ideas finally sunk in, but I instantly recognized that Doug Goodman had a unique way of explaining things. Taking a short course from him in the late 1980s cemented this opinion. When I needed to understand the impact of illumination aberrations on imaging about a decade later, I again turned to Doug’s papers to teach me.

I liked Doug because he was wicked smart, but also because he was quirky, with an odd and irreverent sense of humor that I always appreciated. He worked at IBM during the golden years of applied research, and was one of the extremely talented group of scientists and engineers working in lithography that so impressed me about IBM.

Doug loved to explain things on many different levels, especially using demonstrations. His classic 1995 paper “Optics demonstrations with an overhead projector” became a short course and then a book. Long after the tech world embraced PowerPoint and LCD projectors, Doug still gave talks with an overhead projector and hand-written transparencies, very much in a classic professorial style. The last paper I saw him give was at an SPIE lithography conference in 2004. The organizers had to dig up an overhead projector just for him. The topic was how to explain partial coherence. His talk included the use of a Pyrex pan full of water.

Doug left IBM to work for Polaroid in 1993, and I saw him less frequently as he strayed from my field of lithography. I was glad to see him come back to lithography when he became a senior scientist at Corning Tropel in 2002. By then, the advance of his Parkinson’s disease was plain to see. He retired in 2007 and died on May 14, 2012 at the age of 65. Too young. He is missed.

Some links to obituaries for Doug:
http://spie.org/x87302.xml
http://www.osa.org/About_Osa/Newsroom/Obituaries/Goodman-Douglas.aspx
http://www.optics.arizona.edu/News/2012Newsletters/2012goodman-douglas-s.htm
http://hosting-25262.tributes.com/show/Douglas-S.-Goodman-93849644

The Power of Belief

Have you heard of power bands? The most popular brand is Power Balance, a company which “blend[s] the powers of Eastern Philosophy and Western Science with Innovative Technologies to deliver products that improve and enhance people’s lives.” Sounds impressive, eh? A power band (described by Power Balance as a “sports performance wristband”) is a silicone bracelet with holograms that “resonate with and respond to the natural energy field of the body.” [Unless you buy one from Lifestrength, a competing company whose identical-looking bracelets create “negative ions”.] According to numerous athletes paid to endorse the product, it really works.

There is only one problem. They cost $30. That’s a lot of money, even if it is virtually guaranteed to improve my life. That’s why I decided to buy a Placebo Band. It works in exactly the same way as the Power Balance band, with exactly the same results. But it only costs $4! What a deal! I couldn’t pass it up. Now I wear the power of belief around my wrist wherever I go. Shouldn’t you?

The Resolution Limit of Hard Drive Manufacturing

In lithography, pushing the limits of resolution is what we do. These efforts tend to get a lot of press. After all, the IC technology nodes are named after the smallest nominal dimensions printed with lithography (though the marketing folks who decide whether the next generation will be called the 16-nm or 14-nm node don’t care much about the opinions of lithographers). And the looming end of lithographic scaling has gotten all of us worried – regardless of your faith in EUV. Yes, resolution is the signature (though not the only) accomplishment of lithographers. That is why it is so important to carefully define what we mean by the term ‘resolution’ and understand why it is different for different tasks.

As I have said many times in papers, courses, and my textbook, the resolution one can achieve depends critically on the type of feature one is trying to print. In particular, the nature and limits of resolution are very different for dense patterns as compared to isolated patterns. For the last 10 years or so, the IC industry has been focused almost exclusively on pitch resolution – the smallest possible packing of dense lines and spaces. In optical lithography this resolution depends on the ratio of the wavelength (λ) to the imaging system numerical aperture (NA). For a single, standard lithographic patterning step there is a hard cut-off: the half-pitch will never drop below 0.25λ/NA (i.e., the pre-factor in this equation, called k1, has a lower limit of 0.25).

For 193-nm lithography, the NA has reached its maximum value of 1.35, so that the dense pattern resolution has bottomed out at a pitch of 80 nm. To go lower, one must use double patterning, or wait for Extreme Ultraviolet (EUV) lithography tools to drop the wavelength. Either way is costly, and the proper path past a 40-nm pitch is currently unknown.
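Plugging in the numbers just quoted (a quick sketch; the 80-nm pitch is the practical value from the text, which sits a bit above the theoretical minimum):

```python
# Dense-pattern resolution limit for 193-nm immersion lithography,
# using the values quoted in the text.

wavelength = 193.0   # nm, ArF
na = 1.35            # maximum NA for 193-nm immersion

# Hard cut-off for a single patterning step: half-pitch >= 0.25 * lambda / NA
min_half_pitch = 0.25 * wavelength / na
print(f"theoretical minimum half-pitch: {min_half_pitch:.1f} nm")  # 35.7 nm

# k1 for the practical 80-nm pitch (40-nm half-pitch) cited above
k1 = 40.0 * na / wavelength
print(f"k1 at an 80-nm pitch: {k1:.2f}")  # 0.28
```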

But the resolution limit for an isolated feature is not so clear cut. While resolution still scales as λ/NA, there is no hard cut-off for k1. As k1 is lowered, lithography just gets harder. In particular, control of the feature width (called the critical dimension, CD) is harder as k1 goes lower. Thus, for isolated lines, resolution is all about CD control.

And that’s where lithography for hard drive read/write head manufacturing differs from IC manufacturing. When manufacturers like Seagate and Western Digital increase the areal density of their drives, you can bet there was a shrink in the feature size on some critical geometry of the read and write heads. And that feature is an isolated line printed with optical lithography.

So how small are the smallest isolated features printed at Seagate and Western Digital? While I don’t have the exact values, I do know they are on the same order as the smallest features obtained by IC lithography – when double patterning is used. In other words, today’s hard drive manufacturing requires 2x-nm lithography (isolated lines) using single patterning.

The CD control requirements for these critical features are about the same as for IC critical features: +/- 10% or so. Overlay is critical too, but maybe a bit relaxed compared to the standard 1/4 – 1/3 of feature size that is the rule of thumb in the IC world. But there are a few extra requirements that make read/write head litho challenging. The wafers are smaller than the standard 300-mm diameter (and they are thick ceramic wafers, not silicon), with no plans for a change to 300 mm. On each wafer, tens of thousands of heads are made (the standard lot size is one to four wafers), so throughput is not quite as critical as for ICs. But this also means that none of the latest generation of lithography tools (such as 193 immersion) is available for this task (they are all 300-mm-only tools). Not that these guys would buy an immersion tool anyway – hard disk manufacturing is extremely cost sensitive, so they make do with lower-NA 193 dry tools.

So let’s do the math. To print 2x-nm features with a moderate-NA 193 dry tool, the hard drive makers are doing single-pattern lithography with k1 below 0.1. This is remarkable! The IC lithographers have never attempted such a feat. How is it done? Of course, you use the strongest resolution enhancement techniques from the IC world you can find. After that, it’s all about CD control, which means attention to detail. Let’s give the hard drive folks the credit they deserve: lithography at k1 < 0.1 is hard. Lithography scaling pressures are at least as fierce in the hard drive world as in the IC world, so you can bet the minimum isolated line feature size will continue to shrink. It will be interesting to see how they do it.
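A rough back-of-the-envelope check of that k1 claim. The feature size and NA below are my own illustrative assumptions (a "2x-nm" line near the low end of the range, and a high-end dry scanner NA), not actual Seagate or Western Digital numbers:

```python
# Estimated k1 for a hard-drive read/write head isolated line,
# assuming a 193-nm dry tool. Feature size and NA are illustrative guesses.

wavelength = 193.0  # nm, dry ArF
na = 0.93           # assumed NA for a high-end 193-nm dry scanner
cd = 20.0           # nm, an assumed "2x-nm" isolated line width

k1 = cd * na / wavelength
print(f"k1 for a {cd:.0f}-nm isolated line: {k1:.3f}")  # 0.096
```

With these assumptions the result lands just below the k1 = 0.1 figure cited above; a somewhat larger feature or lower NA would nudge it either way, but the order of magnitude holds.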