All posts by Chris

Why 450-mm wafers?

Why is 450-mm development so important to Intel (and Samsung and TSMC)?

A few years ago, Intel and TSMC began heavily promoting the need for a transition from the current standard silicon wafer size, 300 mm, to the new 450-mm wafers. While many have worked on 450-mm standards and technology for years, it is only recently that the larger wafer has received enough attention and support (not to mention government funding) to believe that it may actually become real. While there has been much talk about the need for a larger wafer, I’d like to put my spin on the whole debate.

First, a bit of history. Silicon wafer sizes have been growing gradually and steadily for the last 50 years, from half-inch and one-inch silicon to today’s 300-mm diameter wafers. The historical reasons for this wafer size growth were based on three related trends: growing chip size, growing demand for chips, and the greater chip throughput (and thus lower chip cost) that the larger wafer sizes enabled. And while chip sizes stopped increasing about 15 years ago, the other two factors have remained compelling. The last two wafer size transitions (6 inch to 8 inch/200 mm, and 200 mm to 300 mm) each resulted in about a 30% reduction in the cost per area of silicon (and thus cost per chip). And since our industry is enamored with the thought that the future will look like the past, we are hoping for a repeat performance with the transition to 450-mm wafers.

But a closer look at this history, and what we can expect from the future, reveals a more complicated picture.

First, how does increasing wafer size lower the cost per unit area of silicon? Consider one process step as an example – etch. Maximum throughput of an etch tool is governed by two basic factors: wafer load/unload time and etch time. With good engineering there is little reason why these two times won’t remain the same as the wafer size increases. Thus, wafer throughput remains constant as a function of wafer size, so that chip throughput improves as the wafer size increases. But “good engineering” is not free, and it takes work to keep the etch uniformity the same for a larger wafer. The larger etch tools also cost more money to make. But if the tool cost does not increase as fast as the wafer area, the result is a lower cost per chip. This is the goal, and the reason why we pursue larger wafer sizes.

As a simplified example, consider a wafer diameter increase of 1.5X (say, from 200 mm to 300 mm). The wafer area (and thus the approximate number of chips) increases by 2.25. If the cost of the etcher, the amount of fab floor space, and the per-wafer cost of process chemicals all increase by 30% at 300 mm, the cost per chip will change by 1.3/2.25 = 0.58. Thus, the etch cost per chip will be 42% lower for 300-mm wafers compared to 200-mm wafers.
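The arithmetic above can be sketched in a few lines of Python (the 1.5X diameter increase and 30% cost increase are just the illustrative numbers from the example):

```python
# Per-chip cost scaling for a step (like etch) whose wafer throughput
# is independent of wafer size. Assumes chip count scales with wafer
# area and that tool, floor space, and chemical costs all rise by the
# same factor.

def per_chip_cost_ratio(diameter_ratio, cost_increase):
    area_ratio = diameter_ratio ** 2       # chips per wafer scale with area
    return (1 + cost_increase) / area_ratio

ratio = per_chip_cost_ratio(1.5, 0.30)     # 200 mm -> 300 mm, +30% costs
print(f"per-chip cost ratio: {ratio:.2f}")  # 0.58, i.e. a 42% saving
```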

While many process steps have the same fundamental scaling as etch – wafer throughput is almost independent of wafer size – some process steps do not. In particular, lithography does not scale this way. Lithography field size (the area of the wafer exposed at one time) has been the same for nearly 20 years (since the era of step-and-scan), and there is almost zero likelihood that it will increase in the near future. Further, the exposure time for a point on the wafer for most litho processes is limited by the speed with which the tool can step and scan the wafer (since the light source provides more than enough power).

Like etch, the total litho process time is the wafer load/unload time plus the exposure time. The load time can be kept constant as a function of wafer size, but the exposure time increases as the wafer size increases. In fact, it takes great effort to keep the scanning and stepping speed from slowing down for a larger wafer due to the greater wafer and wafer stage mass that must be moved. And since wafer load/unload time is a very small fraction of the total process time, the result for lithography is a near-constant wafer-area throughput (rather than the constant wafer throughput for etch) as wafer size is changed.

One important but frequently overlooked consequence of litho throughput scaling is that each change in wafer size results in an increase in the fraction of the wafer costs caused by lithography. In the days of 6-inch wafers, lithography represented roughly 20 – 25% of the cost of making a chip. The transition to 200-mm (8-inch) wafers lowered the (per-chip) cost of all process steps except lithography. As a result, the overall per-chip processing costs went down by about 25 – 30%, but the per-chip lithography costs remained constant and thus became 30 – 35% of the cost of making a chip.

The transition to 200-mm wafers increased the wafer area by 1.78. But since lithography accounted for only 25% of the chip cost at the smaller 6-inch wafer size, that area improvement affected 75% of the chip cost and gave a nice 25 – 30% drop in overall cost. The transition to 300-mm wafers gave a bigger 2.25X area advantage. However, that advantage could only be applied to the 65% of the costs that were non-litho. The result was again a 30% reduction in overall per-chip processing costs. But after the transition, with 300-mm wafers, lithography accounted for about 50% of the chip-making cost.
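Here is a small Python sketch of this bookkeeping. It follows the text in holding per-chip litho cost constant across the transition; applying the etch example's ~30% cost increase to all non-litho steps is my simplifying assumption:

```python
# Per-chip cost after a wafer size transition, splitting litho
# (unchanged per chip) from non-litho (scales with cost increase / area).

def transition(litho_frac, area_ratio, nonlitho_cost_increase):
    nonlitho_ratio = (1 + nonlitho_cost_increase) / area_ratio
    new_cost = litho_frac + (1 - litho_frac) * nonlitho_ratio
    return new_cost, litho_frac / new_cost  # cost ratio, new litho fraction

# 200 mm -> 300 mm: litho ~35% of chip cost, 2.25X area, ~+30% non-litho costs
cost, frac = transition(0.35, 2.25, 0.30)
print(f"per-chip cost ratio: {cost:.2f}, litho fraction after: {frac:.2f}")
# roughly a 30% cost drop, with litho near 50% of the total
```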

Every time wafer size increases, the importance of lithography to the overall cost of making a chip grows.

And there lies the big problem with the next wafer size transition. Each wafer size increase affects only the non-litho costs, but those non-litho costs are becoming a smaller fraction of the total because of wafer size increases. Even if we can achieve the same cost savings for the non-litho steps in the 300/450 transition as we did for the 200/300 transition, its overall impact will be less. Instead of the hoped-for 30% reduction in per-chip costs, we are likely to see only a 20% drop in costs, at best.
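As a sanity check on that 20% figure, suppose litho is now ~50% of chip cost and the non-litho steps repeat the ~42% per-chip saving of the 200/300 transition (both assumptions taken from the earlier examples):

```python
# 300 mm -> 450 mm estimate: litho per-chip cost unchanged, non-litho
# per-chip costs fall to 58% of their previous value (the same ratio
# as in the 200/300 etch example).
litho_frac = 0.50
nonlitho_ratio = 0.58
new_cost = litho_frac + (1 - litho_frac) * nonlitho_ratio
print(f"overall per-chip saving: {1 - new_cost:.0%}")   # ~21%
```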

So we must set our sights lower: past wafer size transitions gave us a 30% cost advantage, but 450-mm wafers will only give us a 20% cost benefit over 300-mm wafers. Is that good enough? It might be, if all goes well. But the analysis above applies to a world that is quickly slipping away – the world of single-patterning lithography. If 450-mm wafer tools were here today, maybe this 20% cost savings could be had. But shrinking feature sizes are requiring the use of expensive double-patterning techniques, and as a result lithography costs are growing. They are growing on a per-chip basis, and as a fraction of the total costs. And as lithography costs go up, the benefits of a larger wafer size go down.

Consider a potential “worst-case” scenario: at the time of a transition to 450-mm wafers, lithography accounts for 75% of the cost of making a chip. Let’s also assume that switching to 450-mm wafers does not change the per-chip litho costs, but lowers the rest of the costs by 40%. The result? An overall 10% drop in the per-chip cost. Is the investment and effort involved in 450-mm development worth it for a 10% drop in manufacturing costs? And is that cost decrease enough to counter rising litho costs and keep Moore’s Law alive?
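The worst-case numbers work out the same way:

```python
# Worst case: litho is 75% of chip cost and unchanged per chip;
# the remaining 25% of costs drop by 40%.
litho_frac = 0.75
new_cost = litho_frac + (1 - litho_frac) * (1 - 0.40)
print(f"overall per-chip saving: {1 - new_cost:.0%}")   # 10%
```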

Maybe my worst-case scenario is too pessimistic. In five or six years, when a complete 450-mm tool set might be ready, what will lithography be like? In one scenario, we’ll be doing double patterning with EUV lithography. Does anyone really believe that this will cost the same as single-patterning 193-immersion? I don’t. And what if 193-immersion quadruple patterning is being used instead? Again, the only reasonable assumption will be that lithography accounts for much more than 50% of the cost of chip production.

So what can we conclude? A transition to 450-mm wafers, if all goes perfectly (and that’s a big if), will give us less than 20% cost improvement, and possibly as low as 10%. Still, the big guys (Intel, TSMC, IBM, etc.) keep saying that 450-mm wafers will deliver 30% cost improvements. Why? Next time, I’ll give my armchair-quarterback analysis as to what the big guys are up to.

Semicon West Lithography Report

OK, I have to admit this right off: I didn’t go to Semicon West (held two weeks ago in San Francisco). I try never to go to Semicon West (I’ve been twice in the last 30 years, both times against my will). Why should I go? To listen to the latest marketing messages and company spin? To see a few technical talks that are way too light on the technical, but still full of talk? I don’t need to walk the cavernous Moscone Center to get that – everybody plasters the Web with this stuff on a regular basis. Thanks, but I think I’ll stay home.

This year was a perfect case in point. The only real news from Semicon was in the news – Intel’s announced investment in ASML. Yes, it would have been fun to sit in a San Francisco bar each evening and dissect the press releases and develop conspiracy theories. But even that is not really necessary. I’m here to give you my take on what the Intel investment means – and you don’t even have to buy me a beer to get it. (Though if you like this post, please feel free to buy me one the next time you see me.)

Intel’s investment in ASML has two parts – related, but separate. First, Intel is spending $2.1B to buy 10% of ASML, with an option to buy another 5%. ASML will use the money to buy back the same number of its shares, so there will be no stock dilution (a so-called synthetic buyback). That also means ASML will be getting nothing (no money, I mean) from this part of the deal. ASML is also offering similar deals to Samsung and TSMC, up to 25% ownership in the company. So what does this part of the deal mean? Intel and ASML made it clear that Intel gets no voting rights and won’t get early access to ASML technology or tools. Of course, they had to say that to avoid anti-trust litigation. So does the Intel investment help anyone?

There are three reasons why the Intel investment in ASML makes sense. First, it confirms the obvious: the success or failure of ASML will be mirrored as success or failure at Intel. Lest anyone doubt it, Intel needs Moore’s Law scaling to continue its growth and profitability. Lithography is the critical technology to make that happen, and ASML is the critical company to make lithography happen. Second, even without a place on the board, Intel’s ownership stake will add financial stability to ASML, whose stock price could easily drop dramatically if its EUV program were to flirt with failure. Since ASML’s importance to the industry goes far beyond its EUV program, keeping ASML developing and manufacturing lithography tools is critical.

But the third reason the investment makes sense is that the stock purchase is coupled with a $1B Intel investment in ASML R&D. This $1B infusion is what the whole deal is about, and the investment has one purpose: to speed 450-mm tool development at ASML. For several years now, as talk of 450-mm wafer sizes has heated up to what appears to be a critical mass, ASML has repeatedly said that it can’t do EUV and 450-mm development at the same time. After EUV has succeeded, then ASML will commit to 450-mm tool development. But since the day of reckoning for EUV continues to push out (possibly to 2016 or later), that means lithography, representing 50% of the cost of making a chip, won’t be 450-mm ready anywhere near in time to meet the (overly optimistic) timetables of the big 450-mm proponents (Intel, Samsung, and TSMC).

So here comes the investment from Intel. While the press release mentioned the importance of both EUV and 450-mm R&D, the only project mentioned for funding was 450-mm tool development. And to be clear, this is not only, or even mostly, EUV 450-mm development. A working 450-mm fab will need 193-immersion tools, 193 dry tools, and maybe 248-nm tools as well, all running at the 450-mm wafer size. If EUV works, a fab will need 450-mm EUV tools as well, but this is the only part of the lithography tool set that is optional for a 450-mm fab. So, in my opinion, the Intel investment is all about the 450-mm wafer size, and has essentially nothing to do with EUV lithography.

Why is 450-mm development so important to Intel (and Samsung and TSMC)? My answer to that question next time.

Douglas S. Goodman, 1947 – 2012

In pursuing a career in optical lithography, I’ve learned a lot about optics. When I graduated from college as an engineer I had the typical scant background in imaging, and I found the topic of partial coherence particularly opaque. Yes, all of the equations were in Born and Wolf, but that doesn’t mean I could understand them. That’s when I first discovered Doug Goodman, then working at IBM. He had developed a 2D optical imaging simulator and his papers on partial coherence approached the topic in a novel and enlightening way. I still had to read several other treatments before the ideas finally sunk in, but I instantly recognized that Doug Goodman had a unique way of explaining things. Taking a short course from him in the late 1980s cemented this opinion. When I needed to understand the impact of illumination aberrations on imaging about a decade later, I again turned to Doug’s papers to teach me.

I liked Doug because he was wicked smart, but also because he was quirky, with an odd and irreverent sense of humor that I always appreciated. He worked at IBM during the golden years of applied research, and was one of the extremely talented group of scientists and engineers working in lithography that so impressed me about IBM.

Doug loved to explain things on many different levels, especially using demonstrations. His classic 1995 paper “Optics demonstrations with an overhead projector” became a short course and then a book. Long after the tech world embraced Powerpoint and LCD projectors, Doug still gave talks with an overhead projector and hand-written transparencies, very much in a classic professorial style. The last paper I saw him give was at an SPIE lithography conference in 2004. The organizers had to dig up an overhead projector just for him. The topic was how to explain partial coherence. His talk included the use of a pyrex pan full of water.

Doug left IBM to work for Polaroid in 1993, and I saw him less frequently as he strayed from my field of lithography. I was glad to see him come back to lithography when he became a senior scientist at Corning Tropel in 2002. By then, the advance of his Parkinson’s disease was plain to see. He retired in 2007 and died on May 14, 2012 at the age of 65. Too young. He is missed.

Some links to obituaries for Doug:
http://spie.org/x87302.xml
http://www.osa.org/About_Osa/Newsroom/Obituaries/Goodman-Douglas.aspx
http://www.optics.arizona.edu/News/2012Newsletters/2012goodman-douglas-s.htm
http://hosting-25262.tributes.com/show/Douglas-S.-Goodman-93849644

The Power of Belief

Have you heard of power bands? The most popular brand is Power Balance, a company which “blend[s] the powers of Eastern Philosophy and Western Science with Innovative Technologies to deliver products that improve and enhance people’s lives.” Sounds impressive, eh? A power band (described by Power Balance as a “sports performance wristband”) is a silicone bracelet with holograms that “resonate with and respond to the natural energy field of the body.” [Unless you buy one from Lifestrength, a competing company whose identical-looking bracelets create “negative ions”.] According to numerous athletes paid to endorse the product, it really works.

There is only one problem. They cost $30. That’s a lot of money, even if it is virtually guaranteed to improve my life. That’s why I decided to buy a Placebo Band. It works in exactly the same way as the Power Balance band, with exactly the same results. But it only costs $4! What a deal! I couldn’t pass it up. Now I wear the power of belief around my wrist wherever I go. Shouldn’t you?

The Resolution Limit of Hard Drive Manufacturing

In lithography, pushing the limits of resolution is what we do. These efforts tend to get a lot of press. After all, the IC technology nodes are named after the smallest nominal dimensions printed with lithography (though the marketing folks who decide whether the next generation will be called the 16-nm or 14-nm node don’t care much about the opinions of lithographers). And the looming end of lithographic scaling has gotten all of us worried – regardless of your faith in EUV. Yes, resolution is the signature (though not the only) accomplishment of lithographers. That is why it is so important to carefully define what we mean by the term ‘resolution’ and understand why it is different for different tasks.

As I have said many times in papers, courses, and my textbook, the resolution one can achieve depends critically on the type of feature one is trying to print. In particular, the nature and limits of resolution are very different for dense patterns as compared to isolated patterns. For the last 10 years or so, the IC industry has been focused almost exclusively on pitch resolution – the smallest possible packing of dense lines and spaces. In optical lithography this resolution depends on the ratio of the wavelength (λ) to the imaging system numerical aperture (NA). For a single, standard lithographic patterning step there is a hard cut-off: the half-pitch will never drop below 0.25λ/NA (i.e., the pre-factor in this equation, called k1, has a lower limit of 0.25).

For 193-nm lithography, the NA has reached its maximum value of 1.35, so that the dense pattern resolution has bottomed out at a pitch of 80 nm. To go lower, one must use double patterning, or wait for Extreme Ultraviolet (EUV) lithography tools to drop the wavelength. Either way is costly, and the proper path past a 40-nm pitch is currently unknown.
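For the record, here are the numbers behind that pitch floor. The k1 = 0.25 cutoff is the hard theoretical limit; the practical floor sits a bit higher, consistent with the 80-nm pitch quoted above:

```python
# Single-exposure dense-pattern limit: half-pitch = k1 * wavelength / NA,
# with k1 >= 0.25 as the hard theoretical cutoff.
wavelength = 193.0    # nm (ArF)
na = 1.35             # maximum immersion NA
theoretical_hp = 0.25 * wavelength / na
print(f"theoretical half-pitch limit: {theoretical_hp:.1f} nm")  # 35.7 nm

# The practical 80-nm pitch floor (40-nm half-pitch) corresponds to:
k1_practical = 40.0 * na / wavelength
print(f"practical k1: {k1_practical:.2f}")   # ~0.28
```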

But the resolution limit for an isolated feature is not so clear cut. While resolution still scales as λ/NA, there is no hard cut-off for k1. As k1 is lowered, lithography just gets harder. In particular, control of the feature width (called the critical dimension, CD) is harder as k1 goes lower. Thus, for isolated lines, resolution is all about CD control.

And that’s where lithography for hard drive read/write head manufacturing differs from IC manufacturing. When manufacturers like Seagate and Western Digital increase the areal density of their drives, you can bet there has been a shrink in the feature size on some critical geometry of the read and write heads. And that feature is an isolated line printed with optical lithography.

So how small are the smallest isolated features printed at Seagate and Western Digital? While I don’t have the exact values, I do know they are on the same order as the smallest features obtained by IC lithography – when double patterning is used. In other words, today’s hard drive manufacturing requires 2x-nm lithography (isolated lines) using single patterning.

The CD control requirements for these critical features are about the same as for IC critical features: +/- 10% or so. Overlay is critical too, but maybe a bit relaxed compared to the standard 1/4 – 1/3 of feature size that is the rule of thumb in the IC world. But there are a few extra requirements that make read/write head litho challenging. The wafers are smaller than the standard 300-mm diameter (they are thick ceramic wafers, not silicon), with no plans for a change to 300 mm. On each wafer, tens of thousands of heads are made (the standard lot size is one to four wafers), so throughput is not quite as critical as for ICs. But this also means that none of the latest generation of lithography tools (such as 193 immersion) are available for this task (they are all 300-mm only tools). Not that these guys would buy an immersion tool anyway – hard disk manufacturing is extremely cost sensitive, so they make do with lower-NA 193 dry tools.

So let’s do the math. To print 2x-nm features with a moderate-NA 193 dry tool, the hard drive makers are doing single-pattern lithography with k1 below 0.1. This is remarkable! The IC lithographers have never attempted such a feat. How is it done? Of course, you use the strongest resolution enhancement techniques from the IC world you can find. After that, it’s all about CD control, which means attention to detail. Let’s give the hard drive folks the credit they deserve: lithography at k1 < 0.1 is hard. Lithography scaling pressures are at least as fierce in the hard drive world as in the IC world, so you can bet the minimum isolated line feature size will continue to shrink. It will be interesting to see how they do it.
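A quick back-of-envelope check of that claim. The NA and exact feature size here are my assumptions, since the text only says "moderate-NA" and "2x-nm":

```python
# k1 = CD * NA / wavelength for an isolated line.
wavelength = 193.0    # nm (dry ArF)
na = 0.75             # assumed moderate NA for a dry tool
cd = 25.0             # nm, hypothetical "2x-nm" isolated line
k1 = cd * na / wavelength
print(f"k1 = {k1:.3f}")   # ~0.097, indeed below 0.1
```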

Aloha Lithography!

An excuse to travel to Hawaii? You don’t have to ask me twice. Especially if it is the Big Island, my favorite of the Hawaiian isles. My excuse this time? The 3-beams conference, also called triple-beams, EIPBN, or occasionally (rarely) the International Conference on Electron, Ion and Photon Beam Technology & Nanofabrication.

The conference was held last week (May 29 – June 1) at the excessively large Hilton Waikoloa Resort, where if I chose not to take the train or the boat from the lobby to my room, I could make the 15 minute walk instead. With the ocean, a lagoon full of sea turtles, dolphins to wonder over, and too many pools to count, one could easily spend a week’s vacation here without ever leaving the resort – which is no way to spend a vacation on the Big Island.

But I wasn’t here on vacation! I was here on business. OK, the conference was three days and I stayed for eight, but seriously, I was here for the conference. And so I diligently attended papers, ignoring the texts from my wife telling me which pool she was going to next.

Things began on Wednesday with the three plenary talks. Only later did it occur to me that they were of a common theme: optical lithography as the engine of scaling is reaching its limit, so what will come next? Burn Lin, lithography legend and VP of TSMC, gave his now-familiar pitch for massively parallel e-beam direct write on wafer. His analysis is always insightful, but because development of a practical e-beam solution is still 5 years away (I’m being optimistic here), there was an all-too-common bias in his thinking: the devil we don’t know (e-beam) is better than the devil we do know (EUV). Since Extreme Ultraviolet lithography is at the end of its 20-year development cycle, we know all about the problems that could still kill the program. Since massively parallel e-beam wafer lithography is far behind, it is likely that we haven’t seen the worst problems yet (how bad will overlay be, for example?). And in fact, some problems are the same, such as line-edge roughness limiting the practical sensitivity of any resist system.

Matt Nowak of Qualcomm gave a great review of 3D integration through chip stacking. If Nvidia and Broadcom are right and litho scaling below 22-nm doesn’t yield either better-performing or lower-cost transistors, what is next? Innovations in packaging. While not as sexy as wafer processing, packaging adds a lot to the cost of an IC. And with 3D chip stacking, it is likely that packaging costs would go down, system performance would go up, and we might even be able to lower wafer costs by better dividing up functionality between chips. It won’t be long before 3D integration is the new standard of system (chip) integration.

Finally, Mark Pinto of Applied Materials showed a very different example of what to do when silicon scaling begins to fail: go into another market. In this case, the market is silicon photovoltaics (PV). Historically, the PV market’s version of Moore’s Law has shown a 20% decline in cost/Watt for every doubling in installed capacity. That trend seems to be accelerating of late, with commercial installations now running at under $1/W. Grid parity, where the cost of solar electricity equals or is less than the market cost of electricity, has been reached in Hawaii and in several countries (even without accounting for the cost of carbon). The trends all look good, and solar is a good market for Applied.

After the plenary, it was off to the regular papers, with their interesting mix of the practical and the far out. First, an update on what I heard about EUV.

Imec has been running an ASML NXE:3100 for a year now, and its higher throughput means that process development is much easier compared to the days of the old alpha demo tool (ADT). Still, “higher throughput” is a relative term. The most wafers that Imec has run through their 3100 continuously is one lot – 23 wafers – taking about five hours. Thirteen minutes per wafer is a big improvement over several hours per wafer, but still far from adequate.

In the hallways, I heard complaints about $150,000 per EUV mask, and EUV resist at $40K per gallon. Everyone expects these prices to go down when (or if) EUV moves into high volume manufacturing, but anyone who thinks that EUV resists or masks will ever be cheaper than 193 resists or masks just isn’t thinking well. EUV may be Extreme, but it is also Expensive.

There were many excellent papers this year. JSR gave a great talk on some fundamental studies of line-edge roughness (LER) in EUV resists, developing some experimental techniques that were fabulous. A talk from the University of Houston explored the use of small-angle X-ray scattering to measure latent images in chemically amplified resists. Although promising, this technique will need massive control and characterization to yield quantitative results.

Paul Petric of KLA-Tencor described progress on their e-beam lithography tool, REBL. We still have two years before an alpha tool might be ready to ship to a customer. Richard Blaikie from New Zealand gave a great talk on evanescent interference lithography, though I might be biased in my opinion since I was a co-author.

I had a few hallway conversations with folks about scaling. The economic barrier of double patterning means that pitch has stopped scaling for some levels. Metal 1, in particular, is stuck at an 80-nm pitch (for three nodes now, it seems), the smallest that 193 immersion can print in a single pattern. It seems likely that double patterning will have to be used at Metal 1 for the 14-nm node to bring the pitch down to 64 nm. The fin pitch for finFETs must scale, so self-aligned double patterning (SADP) is being used at the 22-nm node, but what will happen when the double patterning pitch limit of 40 nm is reached? The economics of litho scaling looks very ugly for the next few years, with a very real possibility that we just won’t do it (or maybe no one but Intel will do it).

On the last day of the conference there was a slew of good papers on directed self-assembly (DSA), the hottest topic in the lithography world right now. Progress towards practicality is rapid, and universities continue to churn out interesting variations. IBM discussed the possibility of using DSA for fin patterning below 40-nm pitch. They seem very serious about this approach.

Some of my favorite quotes of the week:

Referring to the molten tin sources used for EUV, Jim Thackeray of Dow said “If nature can do volcanos, maybe we can do EUV.”
Referring to EUV resists that can also be used for e-beam lithography, Michael Guillorn of IBM said “In my opinion, this is the best thing we got from the EUV program.”
Referring to problems making the DPG chip at the heart of the REBL system, Paul Petric of KLA-Tencor said “Making tools for making chips is easier than making chips.”

It was a good conference and a fun trip, and now I’m back home, but many of my fellow conference attendees are not. Vivek Bakshi’s EUV workshop was this week in Maui, and next week is the VLSI Technology and Circuits Symposium in Honolulu. I know several folks were able to convince their bosses that a three-week, three-island business trip was required. At the VLSI symposium, one of the evening rump sessions is entitled “Patterning in a non-planar world – EUV, DW or tricky-193?” Patterning is on everyone’s mind now, even chip designers’. So much attention is generally not a good thing. But we lithographers can expect even more attention over the next 12 months, as the industry makes some of the most difficult choices it has ever made in its 50-year history.

Word of the Day

Of all the things I am proud of about myself, my vocabulary is not one of them. I’m constantly confronted by words that I don’t know, but strongly suspect that I should. When I stumble across such unfathomable verbum I usually just pick myself up and hope that no one noticed. But occasionally I reach for a dictionary in a fit of self-improvement. Today was that day, and the word was “prolixity”.

I know, dear reader. You probably learned this word in the third grade (along with its Latin roots and conjugations) and used it in conversation with your mother this week. But I was forced to look it up. And when I did, something profound happened. I was deeply disappointed with the quality of the dictionary definition of this word. So disappointed, in fact, that I took the time to carefully construct what I think is a far superior definition. So without further ado, bother, or protest, I unveil now to the world my definition:

prolixity: 1) the tendency to say things in far more words than is necessary to effectively make a point or convey the essence of a thought; 2) wordiness

To all the lexicographers who read my blog, please feel free to make use of this superior definition. Credit, of course, would be appreciated.

Bumper Sticker Logic

Of course, to speak without fully considering the implications of what is said is a part of the human condition. One of my favorite phrase-types in this genre is “God Bless ____”, where the blank can be “America”, “Our Troops”, or just about anything. I’m sure the primary sentiment is one of support for the putative object of blessing, but it doesn’t take much reflection to realize there is more to it than that. “God Bless America” is really the first half of a full thought, with the unstated second half being “but not other countries”. I’ve never heard anyone say “God bless the world”, and I’m not sure what the point of a blessing would be if not to confer some benefit not available to the unblessed. Personally, I don’t want God to bless Americans to the exclusion of non-Americans, but I suppose there are many people in my country who do.

“God Bless Our Troops” is even more problematic, since its purpose is undoubtedly to ask God to take sides in a current or future armed conflict. A God that was willing to take our side in most of the wars that America has fought (thus ignoring the equally fervent prayers of the other side) is too petty for my liking.

Which brings me to a recent encounter with bumper sticker philosophy. The other day, driving the roads of Austin, Texas, I saw the following bumper sticker, which takes this archetype to a new level:

God Bless Our Troops, Especially Our Snipers

Apparently, not only do our military personnel deserve blessings to the exclusion of other countries’ militaries, but within our own armed forces we should expect those trained to be snipers to get extra blessings. And what blessing should a sniper receive? To become a better shot?

I’m not sure that this bumper sticker’s owner has fully thought through all of the implications of the slogan on display. My fear is that he has.

A Poem by Sarah

Sarah reading her poem
This morning my daughter’s first grade class had a “poetry cafe”, with parents invited to listen to kids read their original poems. Here is one of the two poems that my six-year-old Sarah wrote and read:

The Dance Recital

The grass dances gracefully
to the beautiful music
of the wind.
And the Blue Bonnets
in their beautiful dresses
dance for the dirt
with nothing in it.

I can say with some certainty that she doesn’t get her artistic talents from me. She is already a better poet.