It has been 10 days since the SPIE Advanced Lithography Symposium, and I have finally finished and submitted all my papers! If you are interested in any of them, you can find them on the Fractilia website here:
Note that for the tutorial talk I gave I will not be preparing a paper, but you can get a copy of the slides I presented. And for those of you who didn’t attend the talk, this is the way it ended:
In the second talk of the morning in the EUV session, Andrew Liang of Lam Research showed how much work it takes to optimize a new process, and how that work can pay off. Local critical dimension uniformity (LCDU) is a term that refers to stochastic-induced variation in CD. Conventionally, CDU looks at the variation of CD across a chip, exposure field, wafer, and lot caused by things like mask CD variation, film variations across the wafer, focus control across the exposure field, hotplate temperature uniformity, and many other factors. The length scale of these variations tends to be quite large compared to the pitch of the patterns being printed, so that two features next to each other are assumed to be affected in largely the same way by all of these variations. Stochastic variations, on the other hand, have a length scale (called the correlation length) that is small compared to the feature size, so that we can understand their impact by looking at any features, even ones right next to each other. By measuring the CDU of a small group of features (a 7×7 array of contact holes, for example) we can isolate the stochastic impact on CD uniformity from the other CDU factors. This is the idea behind LCDU.
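To make the local/global distinction concrete, here is a minimal numerical sketch (in Python, with entirely made-up CD numbers) of how measuring small groups of neighboring features isolates the stochastic contribution from the long-length-scale CDU factors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model: CD = target + slowly varying "global" term + stochastic term.
# All numbers here are invented for illustration (CDs in nm).
target_cd = 20.0
n_sites, group = 49, 7          # 49 measurement sites, each a 7x7 hole array
global_offsets = rng.normal(0.0, 1.0, n_sites)   # long-length-scale CDU (one per site)
local_sigma = 1.5                                # stochastic (per-hole) variation

# Each site: 49 holes sharing one global offset, each with its own stochastic error
cds = (target_cd + global_offsets[:, None]
       + rng.normal(0.0, local_sigma, (n_sites, group * group)))

total_cdu = cds.std()                     # mixes global and stochastic variation
lcdu = cds.std(axis=1, ddof=1).mean()     # within-site spread ~ stochastic only

print(f"total CDU = {total_cdu:.2f} nm")
print(f"local CDU = {lcdu:.2f} nm (close to the stochastic sigma of {local_sigma} nm)")
```

Because all the holes within one small array share essentially the same global conditions, the within-site spread recovers the stochastic sigma, while the overall spread is inflated by the global terms.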
Liang optimized the hard mask below the resist by switching to a thinner PECVD film, optimized the lithography process to maximize the image log-slope, and optimized the etch process using atomic layer etching. The last item is the most interesting to me, since it looks like it is possible to use an etch rate that varies as a function of aspect ratio to compensate for resist CD variation. When the resist CD is too small, the aspect ratio of the hole is higher. For a typical etch process, this higher aspect ratio would cause shadowing of the etchant and a reduction in etch rate, making the small hole even smaller. But etching can also involve polymer deposition on feature sidewalls to slow etching down. If that polymer deposition slows down with higher aspect ratio, maybe it is possible to increase the etch rate when contact holes are too small, thus improving the LCDU. To me this seems like magic, but only in the sense of Arthur C. Clarke’s third law, “Any sufficiently advanced technology is indistinguishable from magic.” Others have reported on this very exciting possibility, and I am looking forward to learning more.
Ravi Bonam of IBM collected a large amount of data from a programmed roughness mask, a mask that contained an added rectangle (jog) along the feature edge of varying size and frequency. By measuring the mask and the wafer after printing, something can be learned about the optical transfer of roughness from the mask to the wafer, and the ability of wafer metrology to see roughness at specific frequencies. Unfortunately, his data analysis and presentation left me unable to grasp a single clear lesson from it. I’ll have to wait for the manuscript.
Tom Wallow gave a comprehensive overview of sources of metrology variation for the case of metrology used for OPC model calibration. His two laments were the same as from every lithography model developer. First, customers want models to fit the data better than the data uncertainty justifies. Second, models that are based on physics require data that has accuracy, not just precision. Historically, metrologists have focused on precision for the simple reason that accuracy is just too hard a problem to comprehend. Tom, I hope people absorb your lessons, but don’t hold your breath.
I presented my last paper at 2pm, and then I was mostly done for the day. After giving my last demo of Fractilia’s new MetroLER software, socializing with friends whom I rarely see at other times of the year became my final order of business at the conference. I sampled only about 10 – 15% of the papers in the symposium, and I learned a tremendous amount from them. I dub this year’s major theme to be stochastics, and I am glad for the attention that it is finally receiving. I’ll go home with many ideas to investigate and try out. For anyone interested in my papers and presentations, I’ll be posting them soon on my lithoguru website, and also on the new Fractilia website. But first I’m going home to relax.
Is it just me, or is 8:00am too early for the first technical talk of the day? At least on Wednesday the 8:00am talk was an excellent one. Oktay Yildirim of ASML presented a basic but very useful roughness model. Alas, I had to run out before the end of his paper to give my own paper in the metrology session. The problem with stochastics becoming the major theme of this year’s conference is that there have been stochastics papers everywhere, often conflicting with each other. The morning metrology session was all roughness measurement. Of course, I was pleased with Barton Lane’s presentation of SEM errors and their impact on roughness measurements, but since I was a coauthor that is to be expected. I also gave my own paper on a new method for roughness characterization – the level crossing method. I was especially impressed with Dr. Serap Savari’s work on applying modern algorithmic techniques for power spectral density (PSD) estimation. I guess I’m going to have to figure out what a discrete prolate spheroidal sequence is.
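Since I raised the question: a discrete prolate spheroidal sequence (DPSS) is the taper used in Thomson’s multitaper method, where averaging periodograms computed with several orthogonal DPSS tapers reduces the variance of a PSD estimate. Here is a minimal sketch with SciPy on a synthetic rough edge — generic multitaper estimation with made-up parameters, not Dr. Savari’s specific algorithm:

```python
import numpy as np
from scipy.signal import windows

rng = np.random.default_rng(2)

# Synthetic "rough edge": correlated noise along a line (illustrative only)
n, dx = 1024, 1.0                       # samples, pixel size (nm)
white = rng.normal(0, 1, n)
edge = np.convolve(white, np.exp(-np.arange(30) / 10.0), mode="same")

# Multitaper PSD: average periodograms over K orthogonal DPSS tapers
nw, k = 4, 7                            # time-bandwidth product, number of tapers
tapers = windows.dpss(n, nw, Kmax=k)    # shape (k, n), each taper unit-norm
psds = [np.abs(np.fft.rfft(t * edge)) ** 2 * dx for t in tapers]
psd = np.mean(psds, axis=0)             # averaging tapers reduces estimator variance
freqs = np.fft.rfftfreq(n, d=dx)

print(f"PSD bins: {psd.size}")
```

For a correlated (low-pass) edge like this one, the estimate shows the expected concentration of power at low spatial frequencies, with far less variance than a single raw periodogram.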
Ravi Bonam of IBM went back to an old idea that remains underutilized today – the programmed roughness mask. Similarly, Sergey Babin of aBeam created a metrology test structure with deterministic randomness. Please don’t ask me to explain. The core concept of both is the same – create small structures with programmed “roughness” to test our measurement and analysis capabilities. More creative ideas in these regards will certainly be welcome.
A creative idea came from Harm Dillen of ASML. He used an array of very dense contact holes to measure the field distortion of scanning electron microscope images. His application was edge placement error measurement, but as Barton Lane described earlier it also impacts roughness measurements. Modeling the distortion using a typical first-order overlay model allows the systematic contribution (about 0.6 nm RMS for his data) to be subtracted out. This amount of distortion is enough to have a quite noticeable impact on line-edge roughness measurements. I can’t wait to try this method out.
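The idea, as I understand it, amounts to a least-squares fit of a linear model to the measured hole displacements, followed by subtraction of the fitted part. Here is a toy version on synthetic data (all coefficients and noise levels are invented; a production fit would likely solve both axes jointly with shared magnification/rotation terms):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic SEM field: an 11x11 grid of contact-hole centers (positions in nm).
x, y = np.meshgrid(np.linspace(-500, 500, 11), np.linspace(-500, 500, 11))
x, y = x.ravel(), y.ravel()

# "Measured" displacements = first-order distortion + random measurement noise
tx, ty, mag, rot = 0.3, -0.2, 4e-4, 2e-4          # made-up model coefficients
dx = tx + mag * x - rot * y + rng.normal(0, 0.2, x.size)
dy = ty + mag * y + rot * x + rng.normal(0, 0.2, y.size)

# Fit the same first-order (overlay-style) model by least squares, per axis
Ax = np.column_stack([np.ones_like(x), x, -y])    # dx = tx + mag*x - rot*y
Ay = np.column_stack([np.ones_like(y), y, x])     # dy = ty + mag*y + rot*x
cx, *_ = np.linalg.lstsq(Ax, dx, rcond=None)
cy, *_ = np.linalg.lstsq(Ay, dy, rcond=None)

# Subtract the fitted systematic distortion; what remains is random error
resid_rms = np.sqrt(np.mean((dx - Ax @ cx) ** 2 + (dy - Ay @ cy) ** 2))
print(f"residual RMS after removing systematic distortion: {resid_rms:.2f} nm")
```

The fitted systematic part is what gets subtracted out before the roughness analysis; the residual that remains should approach the random measurement noise floor.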
Alex Robinson of Irresistible Materials gave a talk on increasing the sensitivity of EUV resists. I didn’t attend. But he did corner me later and run through it with me. His cartoon chemistry looked very interesting – a believable mechanism for achieving second order acid amplification chemistry. Now that such chemistry looks possible, I’ll have to think about the roughness implications more carefully. That’s the problem with stochastics – nothing is obvious the first time you think about it.
The evening ended with another round of excellent hospitality suites (thanks to all of the companies that feed me so well throughout the week), with the PROLITH party always being my favorite. For all of you who have asked me if my new company (Fractilia) will revive the traditional bathtub party of my old company (FINLE), the answer is no. The bathtub party must remain the stuff of lithography legend; Fractilia will create its own traditions.
I write my posts the morning after that day of the symposium. And today definitely feels like a “morning after”. Two days of late nights at the hospitality suites followed by far too little sleep are beginning to have their effects. Let’s see if adrenaline and desire can carry me through the rest of the week…
For those reading this blog who do not attend the SPIE Advanced Lithography Symposium, let me explain that there are seven conferences as a part of the symposium, and there are always at least five sessions happening in parallel (Wednesday morning will see all seven). There is almost always more than one paper at any given time that I want to see, but all of my attempts at quantum entanglement with a doppelgänger have led to decoherence. (Yes, that is the ultimate in bad nerd humor.) Be aware that my extremely limited sampling of the symposium does not begin to do it justice.
For me, the day started with ASML’s talk on their new NXE:3400 EUV scanner, soon to be released. As a bit of history, the NXE platform was introduced to us at this symposium in 2010. The NXE:3100 was a “pre-production” tool, described in this way: “With an NA of 0.25 and a productivity of 60wph this tool is targeted for EUV process implementation and early volume production at the 27nm node.” But the NXE:3300 was to be the true production tool, targeted at 125 wph and the 22nm node. As we all know, the 3300 missed its window for use in production, but the much improved NXE:3350 soon became the target production tool. Since there was an upgrade path from the NXE:3300 to the NXE:3350, there was still a chance for those first 3300s to be used in production. But after listening to Intel’s Monday talk, I am getting the impression that all the existing tools in the field are playing the role originally intended for the NXE:3100. It is the NXE:3400 that is now the targeted tool for high volume manufacturing. It has many improvements (such as the Flex-illuminator and a membrane just above the wafer that blocks unwanted out-of-band radiation), with throughput again targeted at 125 wph.
A quick word about throughput. Since throughput is a function of the dose used to expose the resist, and this dose is decided by the customer, ASML must make some assumption about the dose in order to specify the throughput of their tool. In the very early days of EUV development (15 years ago), many people hoped for a 5 mJ/cm2 sizing dose. That dream quickly relaxed to the more realistic (but still unrealistic) 10 mJ/cm2. The throughput specs for the NXE:3100 were based on this assumed dose. But since pattern quality improves with higher dose, the production spec of 125 wph for the NXE:3300 was based on a dose of 15 mJ/cm2. Since then, the unforgiving onslaught of stochastic randomness brought a concession by ASML to a dose of 20 mJ/cm2. This is now the assumption used to predict a 125 wph throughput for the NXE:3400. This dose is also a function of the mask level being printed, with contact holes, vias, and cut masks requiring more dose (maybe twice as much, possibly more). Since I don’t think that a dose of 20 mJ/cm2 is remotely possible due to roughness effects, significant downward scaling of the true throughput from the specified value is inevitable.
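The basic scaling behind those throughput claims is simple enough to sketch. Below is a toy model of my own (not ASML’s calculation — every number is an illustrative guess, tuned only to land near the quoted 125 wph at 20 mJ/cm2) in which exposure time scales as dose over the light power delivered to the wafer, plus a fixed per-wafer overhead:

```python
# Toy scanner-throughput model: illustrative numbers only, not ASML's spec model.
def wafers_per_hour(dose_mj_cm2, source_power_w=250.0,
                    fields_per_wafer=100, field_area_cm2=8.58,
                    overhead_s_per_wafer=10.0, optics_efficiency=0.004):
    """Wafers/hour when exposure time scales as dose / (power at the wafer)."""
    power_at_wafer_w = source_power_w * optics_efficiency     # W = J/s at wafer
    energy_per_field_j = dose_mj_cm2 * 1e-3 * field_area_cm2  # mJ/cm2 -> J/field
    expose_s = fields_per_wafer * energy_per_field_j / power_at_wafer_w
    return 3600.0 / (expose_s + overhead_s_per_wafer)

for dose in (10, 15, 20, 40):
    print(f"{dose:3d} mJ/cm2 -> {wafers_per_hour(dose):5.1f} wph")
```

The point of the exercise: once exposure time dominates overhead, doubling the dose (as contact, via, and cut levels may require) cuts throughput dramatically, which is why the assumed dose matters so much in any wph headline.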
I enjoyed Tim Brunner’s paper on how to intelligently determine roughness specifications (but as a co-author, I am certainly biased). The old ITRS specifications for linewidth roughness were useful in their day, but are now rightly ignored as both irrelevant and unachievable. Tim’s results, though, are scary.
I know that I exhibit selection bias, since I seek out the papers that deal with roughness and stochastic effects, but it seems that stochastics are everywhere at the symposium this year. From linewidth control specifications to edge placement error, stochastic effects are almost never ignored anymore and often are admitted to be the dominant source of error in the lithography process. After years of complaining that roughness was not getting the attention it deserved, that no longer seems to be a problem.
At the resist conference (Advances in Patterning Materials), the theme was often better roughness through chemistry. Or if we don’t have the chemistry ready, it is often better roughness through cartoons of the chemistry. Let me explain a test that I use when examining proposed solutions to stochastic-induced roughness: If I don’t understand how it works, I don’t believe it. Granted, this convolves skepticism with my own quite considerable ignorance, so I have to continually try to find my own errors in thinking and be open to being convinced. Some ideas that fall into the “don’t understand, so don’t believe” category include PSCAR and second-order deprotection kinetics. I hope to be convinced (preferably with good LER data).
We are halfway through the technical conferences. I have two more papers to give, and many more to listen to.
The first day of the symposium began with the awards. I was very happy to see a great group of new SPIE fellows from our community: Emily Gallagher of Imec, Yuri Granik of Mentor Graphics, Qinghuang Lin of IBM, David Pan of the University of Texas at Austin, Mark Phillips of Intel, and James Thackeray of Dow. Congratulations to each of you for this well-deserved recognition. Donis Flagello, CEO of Nikon Research Corporation of America, won this year’s Frits Zernike award (full disclosure, I nominated him). For a history of the Zernike award, see this brief article.
For a change, I enjoyed all three plenary speakers. Usually, at least one is a dud, but not this year. I have to admit that I didn’t care for JSR CEO Nobu Koshiba’s disciple-like references to Ray Kurzweil and his singularity predictions (I’m not a Kurzweil fan), but it was just one part of his overall optimism for Moore’s Law. I don’t agree that Moore’s Law will continue to the 2-nm node, but I guess it’s important that sufficient optimism exists, otherwise we’ll never try. And we should try.
The first two talks of the EUV session were keynote addresses. Britt Turkot of Intel painted a fairly rosy picture of the progress of EUVL towards manufacturing readiness. “It’s been a long and winding road,” and we still have a ways to go, but the eight NXE:3300s and six NXE:3350s in the field are giving semiconductor manufacturers opportunities to shake out enough of the reliability problems to enable process learning. Tool availability continues to creep up (past the 70% mark), and mask making has progressed to the point where Intel has made “multiple” defect-free EUV masks. Intel showed data on “adders” (defects that get added to the mask during use) and reiterated their message from last year that production without a pellicle is not an option. Thus, it makes sense that she listed the availability of a manufacturing-capable pellicle as the biggest risk.
She also mentioned stochastics, saying that “CD and edge placement variability is a deal breaker.” But then her conclusion slide said that resist performance won’t gate the introduction of EUV. I didn’t know what to make of these mixed messages, especially when she explained that the target dose for EUV manufacturing was 20 mJ/cm2. At that dose, there will be plenty of CD and edge placement variability.
Seong-Sue Kim of Samsung was similarly encouraged by EUVL improvement. He expressed amazement at the progress in mask blank defectivity saying it had reached the benchmark of 5 defects per blank that he thinks can enable manufacturing. He also said that the mask blistering problems he mentioned last year have largely been solved. For resists, he thinks that current performance is good enough for 7nm development, but sensitivity (at low roughness) needs to be improved for production. Of course, everyone agrees with that statement. The question is how to do it.
My favorite technical talk was Bill Hinsberg’s modeling of metal-oxide resists – a much needed start. John Biafore gave a great paper modeling millions of contact holes at various EUV conditions and looking for stochastic-related failures. He expressed skepticism at any possible breaking of the RLS trade-off (“resolution, LER, sensitivity – pick two”).
Finally, I was extremely gratified by the reception I received to my tutorial talk and was grateful for the many people willing to stay till 6:30pm to hear me speak. Thanks to Eric Panning and Ken Goldberg and the EUV Lithography conference for giving me such a great opportunity to talk about stochastic-induced roughness.
Sunday was a beautiful day in San Jose, bright and sunny. Just a few blocks away, though, last week’s flooding has devastated whole neighborhoods, causing possibly billions of dollars in damage and the evacuation of more than 10,000 people. Though very close, that disaster seems far away as we begin the SPIE Advanced Lithography Symposium and shift our focus from what nature can do to us to what we can do to harness nature.
Attendance is again at about 2,200 people this year, similar to what it has been for the last eight years. It’s hard to get a full sense of what this week will teach us, but just a cursory glance at the program reveals some major shifts in emphasis in the lithography world. The Advanced Lithography Technologies conference, now renamed Emerging Pattern Technologies, has shrunk considerably over the last two years, from 71 orals and 27 posters in 2015, to 27 orals and 14 posters this year. There are far fewer papers on DSA (directed self-assembly) this year, as well as fewer multibeam e-beam lithography and nanoimprint lithography papers. DSA’s early promise of “resolution in a bottle” has given way to the hard reality of defectivity for a thermodynamically driven system. Meanwhile, the EUV community is emphasizing their progress towards manufacturing readiness. Some hard realities await them as well, though, and talks on line-edge roughness seem to be everywhere.
This gives me the opportunity to advertise my first talk, at 5:40pm on Monday, at the end of the first day of the EUV conference. I have been invited to give a 40 minute tutorial talk on stochastic-induced roughness. I believe this is the first time that we’ve had a tutorial talk at the Advanced Lithography symposium, and I am very excited to be giving it. I hope everyone interested in line-edge roughness will endure the late hour and come and listen.
For those who are interested in the talk but can’t be at the Symposium, I’m excited about SPIE’s new program to capture each presentation on video. SPIE will be filming the slides and recording the audio for each talk. For presenters who have given SPIE permission, these talks will then be posted on the SPIE Digital Library as a permanent record of the presentation. So, if you do miss my tutorial talk, look for it to show up in a few weeks on the Digital Library.
Let the Symposium begin!
The week before the annual SPIE Advanced Lithography Symposium is always a busy one for me, but this year it is particularly so. It’s not just because I am giving a short course and three conference presentations. And it’s not because I am coauthor on four other talks (that’s a total of seven papers – yikes!). No, the real reason I am way too busy this week is that yesterday I launched my new company – Fractilia.
Seventeen years ago I sold my lithography simulation company FINLE Technologies, and after five years at KLA-Tencor I settled into the life of the “Gentleman Scientist”. My goal was to contribute to the science and practice of lithography through my research, teaching, and writing, all the while looking into the problems that I thought were the most interesting. For the last 10 years that “most interesting problem” has been stochastic-induced roughness. It is an incredibly interesting, fun, and important topic, and I have written 25 papers since 2009 that I hope have contributed something to our community’s understanding of this vexing problem. My goal has been to help transform our understanding of stochastics and roughness, so that we can better tackle the problem of reducing it.
Recently, though, I’ve come to understand that the best way for me to realize my vision of making a positive impact on the industry is to commercialize my ideas in software. So I’ve teamed up with my old partner from the FINLE days, Ed Charrier, to start a new company (Fractilia) and to introduce a new product (MetroLER).
The goal of Fractilia is to bring rigor, accuracy, and ease-of-use to the analysis of stochastic-induced roughness in semiconductor manufacturing and process development. Fractilia will deliver something I think is currently lacking in the industry: accurate and repeatable analysis of SEM images to extract the true, unbiased roughness behavior of wafer features. I think the industry needs this product. Of course, the market will tell me if I am right.
So, as I have for the last several years, I’ll be giving papers next week on various ways in which the measurement of pattern roughness can go wrong. I’ll complain about errors in the SEM and how they hide the true roughness behavior on the wafer. I’ll moan about the statistical difficulties of sampling, aliasing, and biases in our measurements. But this year I’ll do more than complain – I’ll do something about it.
For the interested reader, here is a recent press article on the new company:
And here is the company website: www.fractilia.com
Now, it is back to writing papers. See you in San Jose!
Belated Season’s Greetings from the Macks.
I was at Semicon West yesterday, back again for the first time in 15 years. I have mixed feelings about it.
Semicon West, held each year in San Francisco, is the biggest of the Semicon trade shows, the main source of revenue for the semiconductor equipment and materials supplier group SEMI (http://www.semi.org/). I remember well my first visit to a SEMI show, Semicon East in Boston about 1985. That was when the 128 corridor of Boston was thought to rival Silicon Valley (a vain hope at best) and the growing semiconductor industry was still young. I was young too, and inexperienced, and the Semicon show opened up a world of information and opportunity for me. I had much to learn. I also remember exhibiting at Semicon Southwest in Dallas in 1990, a small booth for my even smaller software startup FINLE Technologies. Through the 1990s I attended Semicon Japan many times, but managed to avoid going to Semicon West (a privilege of being the boss – I sent someone else).
Over time the Semicon shows grew in size and simultaneously became less important. At its peak (about 2000), the Semicon West show drew 60,000 people. But even then the relevance of this kind of trade show was declining. We no longer need to roam the aisles of a massive exhibit floor to find out about suppliers and what they have to offer. We do that with Google now. The Semicon East and Southwest shows faded away, leaving only West and its foreign counterparts.
Around this time I finally started attending Semicon West – I now had a boss after selling my company to KLA-Tencor, so it was my turn to go. “Booth duty” was a dirty word at KLA-Tencor, and I presume at most other companies as well. The only people that came by the booth were competitors, people looking for jobs, and the curious neighboring exhibitors. Customer meetings were the only reason most of us came, and those took place off the floor.
And then it happened. My memory is a bit vague, but I think the year was 2001 or 2002 and I think the company was Novellus. They had a contract for a giant amount of space on the Semicon floor, but they didn’t install a massive booth with mock-ups of their equipment. They didn’t send a small army of marketing managers and temp employees (known as “booth babes” in those politically incorrect days). Instead, Novellus installed a skeletal structure (it looked a bit like a cage) and hung gauzy cloth from the beams. They installed some monitors that looped marketing presentations. And they left it completely empty. Not a single employee showed up, and the scene was ghostly. The message was clear – the trade show was no longer relevant.
Since then, most of the other big suppliers have left as well (Applied Materials, ASML, Lam, KLA-Tencor). Many of them established off-site events like breakfast forums and technical programs. The attendance at Semicon West is still large, but only half of its peak. It’s a trade show for the second tier of semiconductor equipment manufacturers, as well as for the very large number of small suppliers to the suppliers. SEMI has responded by adding more and more technical programs of their own, and expanding into solar and other related fields.
All the while I avoided coming here (after I returned to my boss-less lifestyle in 2005). SEMI invited me many times to participate, but I always declined. Finally, I decided it was time to give the show another chance, and I agreed to moderate Tuesday’s technical session on lithography. How bad could it be?
Well, it can’t be very bad when you have a good group of speakers. Lucian Shifren of ARM reminded us that scaling isn’t just about lithography, it impacts the device and the design as well. He asked what should be an obvious question: “Because you can make something smaller, should you make it smaller?” From a lithography perspective, we shrink to get an area benefit. But we never quite get all the area benefit that we expect. A 0.7X shrink should give us a 0.5X area reduction, but it rarely does. Going to restricted design rules causes the area to grow, as does the increase in parasitics and variability that come with shrinking. If we do go to EUV, stochastic variability will consume even more of the shrink. While the cost of designing a chip at each new node dramatically increases ($150M for a 10-nm design), the benefits that come from the new node go down. Shifren predicted that only 5 companies will design chips at the 10-nm node. Is 28-nm the last good node?
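The area arithmetic behind Shifren’s point is easy to make concrete. Here is a toy calculation — the penalty factors are my own illustrative guesses, not numbers from the talk:

```python
# Why a 0.7x linear shrink rarely delivers its ideal ~0.5x area reduction.
# The penalty factors below are invented for illustration.
linear_shrink = 0.7
ideal_area = linear_shrink ** 2           # 0.49x: the geometric best case
design_rule_penalty = 1.10                # restricted design rules grow the layout
variability_penalty = 1.08                # guard-banding for parasitics/variation
actual_area = ideal_area * design_rule_penalty * variability_penalty

print(f"ideal area scaling:     {ideal_area:.2f}x")
print(f"realistic area scaling: {actual_area:.2f}x")
```

Even modest penalties compound, so the delivered area benefit falls well short of the ideal 0.49x — and stochastic variability from EUV would add yet another such factor.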
After the ARM talk, we had four of the more traditional supplier talks. Nikon was represented by Steve Renwick, who described a future of “all of the above lithography”, meaning that we will no longer have one lithography approach that everyone uses for every type of product. 193 immersion will not go away, but it may be supplemented by other approaches such as EUV or DSA. Ben Rathsack of Tokyo Electron America reiterated that point. What I found most interesting from his talk was the brief mention of using spacers in a multiple patterning process to create a kind of self-aligned via with significantly improved tolerance to overlay errors. I think such kinds of innovative ideas are going to be required in a world where variation is a much bigger percentage of the mean.
Mike Lercel gave the ASML talk, where of course everyone was interested in hearing an update on EUV progress. He said that multiple 125W sources were currently being installed and tested at customer sites. It is too early to have any availability data on these sources, and experience suggests that availability will ramp slowly. But that means that 2016 really will be the year when we have “100W by the end of the year”, a prediction first made by Cymer and ASML for 2007 (http://life.lithoguru.com/?p=409). Chris Lyons of JSR focused on resists for EUV, where he claimed that resolution is not a problem, but we still have a ways to go on the dose/LER trade-off. Finally, Harry Levinson of GlobalFoundries talked about the readiness of EUV. He described 2015 as a breakthrough year since, for the first time, a fab could print enough EUV wafers to start process learning. He suggested that “EUV deserves serious consideration for the 7-nm node.” Interestingly, he showed a chart of throughput versus EUV source power that had the throughput lower by about a factor of two compared to what ASML typically shows. Throughput calculations require many assumptions that mostly remain unstated in these kinds of presentations. Obviously, ASML’s assumptions are much more optimistic than GlobalFoundries’. I think I trust GlobalFoundries’ assumptions more.
So, in all, the technical talks were good, and I am glad that I attended. Still, I don’t think Semicon West is for me. I have no desire to go to the exhibit floor, and I’d rather meet up with lithography colleagues (including sales and marketing folks) at a technical meeting rather than a trade show. Obviously, 30,000 people think the show adds some value for them; it just doesn’t for me.