San Jose, California, February 23 – 26, 2026
(The following diary appeared first as a daily blog at life.lithoguru.com and is reproduced here in a slightly edited form.)
SPIE Advanced Lithography and Patterning Symposium 2026 – day 0
Fifty
years is a long time, even for an old guy like me.
That is how long it’s been since the first SPIE lithography
conference. That 1976
conference was held in San Jose and had 26 papers on most of the
expected topics: masks,
metrology, exposure tools, resist processing, and even X-Ray
lithography. Three
papers were in a special session on making chips for the Viking Mars
Lander. According to the
introduction to the conference proceedings (SPIE Volume 80) by
conference chair James Giffin, “The meeting was both timely and
useful, since semiconductor microlithography is recognized by many
in the electronics industry as being the most important process used
in the manufacture of complex semiconductor devices.”
It is striking to me that this description would have been
applicable to every SPIE lithography conference since, including the
one happening in San Jose this week.
So is his last sentence in that introduction: “Ample
opportunity was provided to discuss the subject matter with fellow
professionals in the field and to explore newly emerging ideas
during the panel discussions.”
The
Advanced Lithography and Patterning Symposium has grown
significantly in those fifty years, as has the entire semiconductor
industry, but the core value of the now six conferences that make up
the meeting remains the same.
One slight difference is that this year’s panel discussions
will be looking backward rather than forward, in honor of this
fiftieth anniversary.
I’ll be on that panel on Monday night (thanks mostly to my advanced
age – I’ve been to every SPIE lithography conference since my first
in 1985) hoping to glean the important lessons from the past and how
they might apply to the future.
And the
future is what this symposium is all about – the future of
lithography, and as a consequence semiconductor manufacturing, the
electronics industry, AI, and just about every other thing about
modern life that makes it, well, modern.
Working in lithography all these years has been many things
for me: exciting, energetic, educational, stressful, fast-paced,
financially rewarding, sometimes frustrating, but never boring.
Mostly I am grateful to be in a community that has given me a
welcoming professional home and many lifelong friends.
It is good to be back in San Jose!
SPIE Advanced Lithography and Patterning Symposium 2026 – day 1
The 51st SPIE lithography symposium in San Jose has grown from last year, with more than 2,500 attendees and 550 abstracts accepted.
At the plenary session Andreas Erdmann of the Fraunhofer
Institute received the prestigious Frits Zernike Award in
Microlithography for his important work in lithography simulation.
His many contributions to simulating 3D mask effects in
Extreme Ultraviolet (EUV) lithography have been especially valuable.
Congratulations, Andreas!
We also saw three new SPIE Fellows being introduced:
Toshiro Itani, Frank Schellenberg, and Tadahiro Takigawa.
The
first plenary speaker was Unoh Kwon of SK hynix who talked about the
importance of high bandwidth memory (HBM), especially DRAM, to the
growth of artificial intelligence (AI).
As he said, “The bottleneck of AI systems is shifting from
compute to memory.”
Given how much money Nvidia has made from the compute side of AI,
this is a welcome development for memory makers, who only a year ago
were in a less desirable financial environment.
As leading-edge memory makers shift to filling the HBM
demand, the supply of all DRAM is falling behind demand with
predictable results.
(This is good for those DRAM makers; not so much for anyone who
needs to buy memory of any kind.)
Kwon’s excellent talk described the AI need for high
bandwidth (i.e., speed), high capacity, and low power, resulting in
the use of wide I/O channels, packaging memory close to the GPU, and
stacking the DRAM chips higher using through-silicon vias (TSV).
The latest HBM stacks 16 DRAM chips in one package
(still under 1 mm tall) to give up to 48 GB capacity, though power
consumption is still too high.
Hui
Peng Koh, General Manager of Global Foundries’ Fab 8 in Malta, NY,
gave the second plenary talk on managing a high-mix,
non-leading-edge foundry.
Global Foundries’ profile was significantly raised during the
pandemic when supply chain disruptions meant many customers
(especially automakers) couldn’t get enough chips.
As Koh said, “Supply chains optimized globally for efficiency
are not always resilient in the face of disruptions,” which Global
Foundries has sought to address by spreading fabs with redundant
manufacturing capabilities throughout the world.
In a topic that is of great interest to me, she described how
photonics chips, with relatively large feature sizes, demand extreme
manufacturing precision.
Optical waveguides need very low line-edge roughness (LER) to
prevent optical loss from scattering.
My favorite quote: “LER is not just a metric – it’s a
performance limiter.”
At the
metrology conference later in the morning there was a brief memorial
to Alok Vaid who died in the past year (way too young), followed by
a history of the conference on its 40th anniversary.
And it was during this history overview that I was again
reminded of the immense philosophical problem, studied as far back
as the 13th century by Thomas Aquinas, called
bilocation: you
can’t be in two places at the same time.
Before Nivea Schuch reached the fourth decade in her review
of metrology milestones I had to leave for the resist conference in
order to see Luciana Meli of IBM.
The expected transition from nanosheet transistors (used at
the 3 or 2 nm nodes) to nanostack transistors (expected sometime
below the 10A node) will be limited not as much by resolution as by
edge placement error (EPE) control.
According to Meli, High-NA EUV lithography will provide some
relief from the Stochastics Resolution Gap, but only for a while.
By the time we reach the nanostack transistor era we will be
back to that ugly trade-off between stochastics errors (manifest as
EPE) and exposure dose.
Jumping
again to the metrology conference, Steve McCandless of Micron talked
about the use of AI and machine learning (ML) in metrology.
He assured the metrologists in the room that by reducing time
to solution, “AI [was] not here to take our jobs, but to free up our
weekends.” (While I hope
that is true in semiconductor manufacturing, I’m sure it won’t be
true in many other professions.)
Most of the applications he described use ML’s incredible
ability to interpolate: train
a model with accurate metrology data (or simulation data) at various
important conditions and let it fill in “virtual” results easily and
cheaply at others. While
many hope that AI can also do a good job of extrapolating, I have my
doubts. Even knowing
when an AI result has been interpolated versus extrapolated can be
difficult, which of course leads to the biggest roadblock to the
widespread use of AI in metrology: trust.
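A toy sketch of that interpolate-and-know-it idea (everything here is invented for illustration: the dose response, the numbers, and the crude in-range check, which is no substitute for real out-of-distribution detection):

```python
# Toy "virtual metrology" sketch (not any vendor's actual system):
# interpolate a model trained on measured conditions, and flag when a
# query falls outside the training range (i.e., is an extrapolation).
import numpy as np

# Pretend these are accurate CD measurements at sampled dose conditions.
dose = np.linspace(20.0, 40.0, 9)        # mJ/cm^2, training conditions
cd = 50.0 - 0.8 * (dose - 30.0)          # hypothetical linear dose response (nm)

# Fit a simple model (here a quadratic) to the measured points.
coeffs = np.polyfit(dose, cd, deg=2)

def virtual_cd(query_dose):
    """Return predicted CD and whether the query interpolates the data."""
    interpolating = bool(dose.min() <= query_dose <= dose.max())
    return float(np.polyval(coeffs, query_dose)), interpolating

cd_in, ok_in = virtual_cd(31.3)    # inside the training range: interpolation
cd_out, ok_out = virtual_cd(55.0)  # outside it: extrapolation, trust with care
```

Real fabs would use far richer models and far better out-of-distribution checks, but the trust question is the same: the second prediction deserves much less confidence than the first.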
Later that afternoon Danah Kim of Gauss Labs talked about
their use of “virtual metrology” for tool-to-tool matching, and
“trust” was the word that kept going through my mind.
Towards
the close of the day I was pleased to see extensive data on High-NA
EUV single patterning of small tip-to-tip (T2T) dimensions.
From my experience, low-NA printing of 15 nm tip-to-tip CD at
a tight pitch (28 nm) results in very high T2T local CD uniformity
(LCDU) – between 6 and 8 nm.
That’s a yield-limiting amount of variation.
Shruti Jambaldinni of Lam showed that High-NA EUV can print
even smaller T2T CD at a pitch of 20 nm with LCDU between 3 and 4
nm. She optimized Lam's dry resist absorption versus depth, plus illumination shape
mask absorber choice, to push the T2T LCDU down from 4 nm to 3 nm,
though etch bias pushed that benefit to larger T2T CD.
The last talk of the day for me was by Yeongchan Cho of
Samsung, describing the printing of square arrays of contact holes
at the resolution limit of 0.33 NA EUV single printing.
These 30 nm pitch holes could only be printed using a
clear-field mask and negative tone metal-oxide resist after
extensive source-mask optimization.
I think there were some other tricks involved as well that
Mr. Cho did not mention.
The
long first day of the symposium ended with a panel discussion
commemorating its 50th anniversary.
I was honored to be on the panel with Burn Lin, Martin van
den Brink, Grant Willson, and Janice Golda as we talked about a few
of the lessons learned during those exciting fifty years.
Dan Hutcheson chaired the panel using a talk show-like
interview mode that worked very well, soliciting a few of the many
fascinating stories that all of us have in abundance.
With a theme of “making the impossible possible”, it is clear
that the next half-century of this conference will see many other
“impossible” challenges overcome.
SPIE Advanced Lithography and Patterning Symposium 2026 – day 2
Day two began at 8:00 am with four simultaneous papers I wanted to see – a philosophical problem known as multilocation.
With no best way to decide, I threw a d4 die and landed at
the talk by Kenji Yamazoe of TSMC.
It turned out to be a fun choice since I loved the rigorous
mathematical derivation he gave to define the theoretical maximum
NILS (normalized image log-slope) versus corner rounding radius for
the aerial image of a corner.
David
Fried of Lam Research discussed his company’s massive efforts to
create “virtual twins” of Lam equipment.
What is a virtual twin?
As used by Fried, it is what we used to call multiscale
modeling. Thus, a
virtual twin of an etch tool would model that tool at the equipment
scale (mechanical drawings, power consumption, throughput), reactor
scale (chamber physics of flows and energy leading to wafer
uniformity of reactants), near-feature scale (etch behavior as a
function of feature density), the feature scale (simulation of the
3D etched patterns), and the atomistic scale (molecular modeling of
the chemistry). An
effective virtual twin leads to “virtual experimentation” – running
the model. At different
scales this could lead to better chamber design or an optimized etch
recipe. A quote from the
presentation: “Edge placement error is really what limits scaling.”
Bob
Socha gave my favorite talk of the conference so far:
“Simulation-driven lithography innovation: honoring the legacy of
Prof. Andrew R. Neureuther.” Prof. Neureuther died last summer after
a brief illness at the age of 84, leaving behind massive
accomplishments in lithography and patterning and generations of
students indebted to him.
Bob did a fantastic job of capturing this legacy both from a
technical and a personal level.
I too am indebted to Andy for his inspiring work and his
friendship over many years.
He is missed.
Gopal
Kenath of IBM discussed linewidth roughness (LWR) versus focus as
the limiter of focus tolerance in gate single patterning using 0.33
NA EUV. While the
industry has come to rely on two-beam imaging (through off-axis
illumination) to maximize depth of focus, Gopal revisited the
trade-offs of two-beam versus three-beam image in light of
stochastics. With three
beams (think conventional illumination) we have higher NILS near
best focus, but a faster fall-off with focus compared to two-beam
imaging. But if LWR
limits focus tolerance, does anything change in this trade-off?
Probably not, but it is worth reconsidering the trade-off with stochastics in mind.
Many
people have been talking about ASML’s announcement of a 1000-Watt
EUV light source, and Haining Wang gave a talk with the details of
this milestone.
Specifically, ASML has shown stable operation of the source for one
hour under full dose control.
He noted that this milestone for their 600W source was
announced in 2023, and that source began shipping to customers two
years later. How was
1000W achieved? Lots of
optimizations and improvements were required, but the main factor
was the repetition rate of the laser and tin droplet generator,
which increased from 62 kHz to 100 kHz.
The rate at which these droplets are produced, then blasted
to oblivion to produce light, is astounding.
The management of the heat when this intense light is
reflected off the many mirrors in the system is no small feat
either.
Bernardo Oyarzun of ASML discussed a recurring theme, that focus
tolerance is limited by stochastics.
Using e-beam defect inspection over a large enough area to
achieve one part per million defect capture rates, he showed how the
“defect-free depth of focus” can be used to characterize a
patterning process.
By the
afternoon, I was listening to many machine learning (ML) papers (not
my favorite way to spend an afternoon, but unavoidable at this
conference given the very large number of papers on the topic).
Talks on image denoising in particular do not excite me, but
there are some very good applications of ML worth discussing.
As I mentioned in my post yesterday, ML is especially good at
interpolation, but a second major application is as a correlation
engine. Fabs have for
decades looked for correlations between metrology data and sensor
signals to device yield and performance.
ML can do such correlation searches even better, including
massive context data as described by Sven Boese of KLA.
Saumaya
Gulati of Lam gave one of the many, many Lam Research talks this
week on “3D engineered” dry resists.
Dry deposition of a resist provides a unique opportunity to
tailor resist properties (in particular absorption) as a function of
depth, and that can be used to affect many outcomes.
I liked Gulati’s addition of line wiggling to the list of
outcomes worth considering and optimizing.
But CAR
(chemically amplified resist) is not without its depth-dependent
knobs. B. Rafael-Naab of
Qnity (a spinout of DuPont’s electronics materials business with a
name I’m not sure I will ever get used to) showed that absorption in
a CAR can be increased with the addition of fluorine.
The resulting absorbed energy gradient can lead to top loss
and heavily sloped profiles at the typical 50 nm resist thickness.
However, by tweaking PDQ (photodecomposable quencher)
formulation/polarity to affect its attraction to the top of the
resist film while minimizing other compositional gradients, a
vertical profile can be achieved even for this higher absorption.
Toshiya
Okamura of EMD presented a third option (neither CAR nor metal-oxide
resist) for pushing the resolution limit of EUV.
Their MRX is a small molecule, non-CAR, crosslinking negative
tone material with the additional benefit of being PFAS free.
The material seemed to be based on free radical chain
reactions to achieve the needed sensitivity.
With a 20 nm resist thickness, the 24 nm pitch line/space
patterns from 0.33 NA EUV printing looked reasonably good.
I
dedicated my afternoon to the resist conference, though it meant I
missed the talks and discussion in the “future of EUV” session going
on at the same time. It
was worth it, however, if nothing else but for the great talk by
Chenyun Yuan of Cornell.
One way to address the resist’s role in stochastics is to reduce
compositional variation.
Yuan did that in two ways, by making a monomolecular resist (a
single component), and by making that polymer “sequence-defined”,
meaning that every individual component is attached to the backbone
of the polymer at the same spot for each polymer.
The polypeptoid resist that he made has no additional
sensitizer, is negative tone, does not require post-exposure bake,
and is spin coated to about 25 nm thickness.
Initial printing results look very encouraging, and I am
looking forward to seeing further progress of this material.
Since I
spent the afternoon listening to resist talks, I felt I had earned
the hospitality of the resist companies as I went to their parties
that night. As the
dolphins once said, “Thanks for all the fish.”
SPIE Advanced Lithography and Patterning Symposium 2026 – day 3
I don’t
quite understand it, but it is a thing:
many attendees take a picture of every slide of every talk
they attend. Maybe it is
for trip reports they are required to produce?
I’ve learned to tune it out so that this behavior no longer
interferes with my ability to concentrate on the presenter (mostly).
Last year at the Photomask and EUV Lithography conference I
put a link on my first slide so that attendees could download the
slides rather than taking pictures of them.
This year, IBM has done one better – every talk they are
giving at this conference has a QR code on the title slide that goes
straight to the slides for viewing or download.
Genius! I hope
this becomes a permanent trend copied by all.
Wednesday saw some very good talks.
Nischal Dhungana of the University of Grenoble used CD-SAXS
(small angle x-ray scattering) to measure linewidth roughness (LWR)
of a group of line/space features.
I have to admit I didn’t follow how it works, but since the
SAXS measurement is done in the Fourier plane, the output is (after
some sorting) an almost direct measurement of the PSD (power
spectral density). Much
work remains, so we’ll have to see where this goes.
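The reason a Fourier-plane measurement maps so naturally onto the PSD is just Parseval/Wiener–Khinchin: the squared magnitude of the Fourier transform of the edge-position trace is the PSD, and integrating the PSD over frequency recovers the roughness variance. A synthetic-data sketch of that relationship (nothing here is CD-SAXS, just the math):

```python
# The PSD of an edge-position trace is the squared magnitude of its
# Fourier transform (suitably normalized); it integrates to the edge
# variance, i.e., to (LER/3)^2 when LER is quoted as 3-sigma.
import numpy as np

rng = np.random.default_rng(1)
n, dy = 4096, 1.0                    # points along the edge, 1 nm apart
edge = rng.normal(0.0, 1.5, n)       # hypothetical edge deviations (nm)
edge -= edge.mean()                  # remove the DC (average edge position)

psd = np.abs(np.fft.fft(edge)) ** 2 * dy / n   # PSD estimate (nm^3)
df = 1.0 / (n * dy)                            # frequency spacing (1/nm)

# Parseval: the PSD integrates back to the edge variance.
variance_from_psd = psd.sum() * df
```

With real data the PSD estimate would of course be averaged over many features and corrected for measurement noise, but the Fourier-domain shortcut is the same.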
Erik
Simons of Nearfield Instruments described a very interesting
approach to make Atomic Force Microscope (AFM) measurements of
extremely small features with much higher accuracy.
In order to measure small trenches, the AFM probe must be
long and narrow to fit in the trench.
But a long, narrow probe will bend when near the sidewall of
a feature due to Van der Waals forces, causing considerable error in
the data. Their solution
is to measure the twisting of the cantilever holding the probe with
a laser, then model the additional bending of the probe given that
data. Knowing the
bending allows the data to be corrected.
I don’t know how this might affect tip shape deconvolution (a
point that Simons skipped over and a perennial difficulty for AFMs),
but they seem to be on a roadmap to better accuracy.
Roberto
Fallica of imec studied line wiggling, a problem of growing
importance as line/space feature sizes shrink, using the PSD of the
pattern placement roughness (PPR).
Most line wiggling metrics make use of the LER and LWR, so
I’ll have to think more about the information available in the PPR.
Dario Goldfarb of IBM showed how High-NA EUV patterning of
arrays of holes produced very low local CD uniformity (LCDU).
Numbers less than 1.5 nm are very encouraging.
Kevin
Dorney of imec did such a good job with his talk on the effects of
the environment on metal oxide resists that I did not even mind
seeing dozens of IR spectra.
The systematic way that imec has worked on this important
puzzle shows how science should be done.
Varun Kakkar of ASML looked at the correlation between
contact hole LCDU and another important stochastic effect, local
pattern placement errors (LPPE).
LPPE characterizes the deviation of the center of each hole
from a perfect grid and can be correlated with LCDU.
I’m not sure why that correlation matters, but I’m going to
think about it. Wongi
Park of Samsung showed in the next talk that any measurement of LPPE
must include the measurement and removal of SEM distortion if
accuracy is to be expected.
He showed removal of only low-order terms (translation,
rotation, and magnification), but higher order effects can also be
removed with enough data.
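That low-order removal can be sketched generically as a least-squares fit of a similarity transform (translation, rotation, magnification) between measured and nominal positions, with the fit residuals being the local placement errors of interest. This is my own construction of the idea, not the method from the talk:

```python
# Generic sketch (not Samsung's method): remove translation, rotation,
# and magnification from measured feature positions by least-squares
# fitting a similarity transform against the nominal grid.
import numpy as np

def remove_low_order(nominal, measured):
    """Fit measured ~= s*R(theta) @ nominal + t and return the residuals,
    i.e., the placement errors left after low-order distortion removal."""
    # Linear model: [x', y'] = [a, -b; b, a] [x, y] + [tx, ty],
    # with a = s*cos(theta), b = s*sin(theta).
    x, y = nominal[:, 0], nominal[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.block([[x[:, None], -y[:, None], ones[:, None], zeros[:, None]],
                  [y[:, None],  x[:, None], zeros[:, None], ones[:, None]]])
    rhs = np.concatenate([measured[:, 0], measured[:, 1]])
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    fit = np.column_stack([a * x - b * y + tx, b * x + a * y + ty])
    return measured - fit
```

Higher-order distortion would extend the design matrix with polynomial terms in x and y, which is presumably what "with enough data" buys you.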
I ended
the day by going to Robert Bristol’s first talk as a Fractilia
employee. Since I am a
coauthor on the paper (and Robert’s boss), my opinion is definitely
biased, but I think he did a great job.
And it was an important topic.
Working with Nanya on a DRAM manufacturing process we found a
good stochastics metric that correlates well with end-of-line yield:
line segment unbiased LCDU.
The
poster session was massive (almost overwhelming), but spread out
enough so that it was easy to move around and enjoy the posters.
SPIE Advanced Lithography and Patterning Symposium 2026 – day 4
Beginning the morning in the metrology session, Yasuhiro Shirasaki
of Hitachi High Technologies used the electrical behavior of a SEM
to look at more than just images.
As Voltage Contrast mode shows, the electrons from a SEM beam
can probe the electrical properties of a point on the wafer.
Here, a measurement of charge (as indicated by the energy of
the secondaries) versus time was used to measure gate leakage:
charge up a transistor and see how long it takes to
dissipate. Toshimasa
Kameda, also of Hitachi, looked at different SEM voltages and
signals to try to investigate profile changes during self-aligned
quadruple patterning (SAQP).
A beam energy of 5 keV maximized sensitivity of the linescan to
top profile asymmetry (by comparing the linescan midpoint between
left and right edges at a threshold of 0.5 versus 0.9).
Differences in the pattern depth of the different spaces were
estimated using the space width divided by the graylevel of the
space at a voltage of 300 V.
I suspect, however, that both of these metrics are sensitive
to a variety of factors, not just the SAQP profile shape.
Pushkar
Sathe of NIST created synthetic SEM images that were then measured
to discern the sensitivity of LER measurement to SEM Noise, SEM
contrast, and feature geometry.
There was nothing about this study that I liked.
The synthetic SEM images were exceptionally simplified and
thus not representative of real SEMs.
The “LER” was in fact just a single jog in a line, of various amplitudes and lengths.
And finally, the measurement of LER used image processing
techniques that did not represent anyone’s best practices for LER
measurement. I don’t
think his results are useful.
I
always try to make a point of attending talks by Ryosuke Kizu of the
National Institute of Advanced Industrial Science and Technology
(Japan), since they are always full of careful work and good
science. The same was
true this year despite it being one of too many talks using machine
learning for metrology.
Kizu’s goal was an ambitious one:
get a good LER measurement from a single noisy image with 12
features. Did he
succeed? As is usually
the case with new machine learning studies, the answer is maybe.
He defined three loss functions that were intended
specifically for this problem, trained on a modest number of images
(508), and showed decent results.
However, success in the lab and success in the fab can be
very different, and much more testing will be required to see if a
model trained on the past can adequately evaluate an uncertain
future. Machine learning is very good at interpolation, but not so
good at extrapolation.
As an
aside, I’m happy to report that Kizu described his approach as Deep
Learning rather than AI.
It has become trendy to relabel all machine learning approaches as
“artificial intelligence” in order to capture a bit of the current
hype and euphoria around AI.
I don’t like it.
If you used machine learning, call it that.
EunKyeong Jong of SK hynix, in a talk with Applied Materials, looked
at contact hole shape metrics in addition to CD in order to
characterize stochastics.
I’ve been promoting this concept for many years, so I am glad
to see it catching on.
The two metrics were called striation and triangularity, though they
were not defined in the talk so I had a hard time interpreting
results.
Shubhankar Das of imec gave one of many talks at this conference
pushing the limits of tip-to-tip spacing using high-NA EUV single
patterning and dry metal oxide resists.
It is important to know how far one can push that CD without
the use of directional etch, since directional etchers are very
expensive! I liked one
graph of his in particular, showing a nice parabolic response of
spacewidth roughness to focus.
There have been many talks at this conference (including by
Fractilia) that indicate stochastics metrics can be better at
detecting a focus drift than CD.
I
learned of a new high-NA EUV stitching technique from Natalia
Davydova of ASML called Block and Route.
The EDA (electronic design automation) step of floor planning
can be used to place the IP blocks in a chip so that the (now
usually jagged) stitching region falls between these major
functional blocks. This
means that stitching only happens at later metal layers (wiring the
functional blocks together) where small stitching errors have less
impact.
I went
to only one talk in the Novel Patterning Technologies conference all
week, a consequence of too many parallel sessions.
The last talk of that conference was by Bodil Holst of Lace
Lithography (Norway), and I made a point of seeing it because Dr.
Holst had reached out to me last year on her topic of metastable
atom lithography. (Full
disclosure – I have no financial interests in Lace Lithography, but
I gave some informal advice to them about the talk.
I hope the advice was worth the price – free.)
This new lithography company
is nothing if not ambitious.
Using metastable neutral atoms of Helium (energy = 20 eV,
wavelength = 0.1 nm) they demonstrated the first printing results of
their prototype lithography tool.
That wavelength and energy are quite nice for exposing a
monolayer of resist, but the challenges are immense.
How do you pattern transfer a monolayer of resist (even more
challenging than the top surface imaging approaches used 30 years
ago)? The mask is a
silicon nitride stencil membrane, with well-known problems of
manufacturability and stability (though their holography-inspired
nearfield imaging approach allows for struts to be placed within the
pattern). Overlay has
yet to be addressed.
Still, it was fun to see such an audacious attempt to move the
needle on resolution by a very large amount.
In the
afternoon I saw some of the talks on high-NA EUV readiness for high
volume manufacturing.
The bottom line: very good progress.
In the last few months the first EXE:5200B, ASML's target model for production, was qualified.
HVM qualification, however, is still ongoing.
Marie Krysak of Intel discussed that company’s experience at
replacing a three-mask SALELE (self-aligned litho-etch-litho-etch)
process at 0.33 NA with a single-mask high-NA EUV print.
One quote:
“Random variability has replaced overlay as the largest component of
total EPE budget.” She
also mentioned an oft-neglected benefit of reducing line/space
roughness: reduced false
defects during optical defect inspection.
The
last talks of the week were in the metrology conference.
KLA and imec gave a talk about using a calibrated stochastics
lithography simulator (PROLITH), accelerated with machine learning,
to predict contact hole defectivity.
One interesting outcome when simulating defectivity through
focus was that best focus (minimum defectivity) was not the same for
missing holes as for merged holes.
Elisa
Novelli of IBM gave a talk (I am a co-author) on the importance and
difficulty of measuring small contact holes.
A square array of 45 nm pitch holes, exposed through a range of doses, produced a very wide range of hole sizes.
Two CD-SEMs from different manufacturers were used to measure
those wafers (using each manufacturer’s BKM, or best known methods), and then MetroLER
measured hole CDs using the same sets of images.
Predictably, none of the four sets of results matched very
well (though MetroLER matched the two CD-SEMs the best).
But one CD-SEM failed almost completely to measure holes
below about 12 – 13 nm in diameter when the pixel size was 0.5 nm.
This prompted a pixel size study that included trying to
understand the influence of sample damage.
The results were very interesting, but the message I got was
very clear: Your current
approach for measuring contact holes may not work as we push CDs
lower (for example, with high-NA EUV).
Don’t take your current metrology for granted.
Philipp
A. Wieser of Brookhaven National Lab looked at the measurement of
resist line/space patterns using CD-SAXS and quantified the damage
to the resist caused by the x-rays.
Linewidth changed by 5% during the measurement and LER
increased, requiring efforts to reduce the x-ray dose.
Miki Isawa of Hitachi High Technologies used a combination of
secondary electron (SE) and backscattered electron (BSE) images from
a CD-SEM at 300 or 500 V to try to detect if a contact hole is
scummed. When the resist is on a spin-on glass underlayer, the BSE
images clearly showed when a hole was sufficiently scummed.
I doubt the same will be true for an organic underlayer, but
combining SE and BSE images is an interesting option for at least
some applications since those BSE images essentially come for free.
With
the conference over, I can look back with some small amount of
perspective. Two themes
stand out to me:
1) It has become accepted wisdom that scaling is now limited by EPE, and that the largest component of EPE is stochastics.
2) Resolution transitions (to high-NA EUV, for example) are vastly more complicated and are happening more slowly each time.
Item 2)
is partially a result of item 1).
The other major lesson is that AI is a huge boon for the
industry and represents a tool that all companies are trying to
figure out how to use to address items 1) and 2).
None of this is easy, but all of it is fun (at least for
someone with a twisted sense of fun like me).
Chris Mack is a writer and lithographer in Austin, Texas.
© Copyright 2026, Chris Mack.