‘The Laziest Person at Tesla’


I’ve spoken about Jim Keller many times on AnandTech. In the world of semiconductor design, his name draws attention simply by the number of large successful projects he has worked on, or led, that have created billions of dollars of revenue for those respective companies. His career spans DEC, AMD, SiByte, Broadcom, P.A. Semi, Apple, AMD (again), Tesla, Intel, and now he is at Tenstorrent as CTO, developing the next generation of scalable AI hardware. Jim’s work ethic has often been described as ‘enjoying a challenge’, and over the years when I’ve spoken to him, he always wants to make sure that what he is doing is both a challenge and important to whoever he is working for. More recently that means working on the most exciting semiconductor direction of the day, whether high-performance compute, self-driving, or AI.



Jim Keller

CTO Tenstorrent

Ian Cutress

AnandTech

I have recently interviewed Tenstorrent’s CEO, Ljubisa Bajic, alongside Jim, discussing the next generation of AI semiconductors. Today we’re publishing a transcript of a recent chat with Jim, now five months into his role at Tenstorrent – but this time more so to talk about Jim the person, rather than simply Jim the engineer.

Jim Keller: Work Experience

Years             Company      Title                              Important Product
1980s – 1998      DEC          Architect                          Alpha
1998 – 1999       AMD          Lead Architect                     K7, K8v1, HyperTransport
1999 – 2000       SiByte       Chief Architect                    MIPS Networking
2000 – 2004       Broadcom     Chief Architect                    MIPS Networking
2004 – 2008       P.A. Semi    VP Engineering                     Low Power Mobile
2008 – 2012       Apple        VP Engineering                     A4 / A5 Mobile
8/2012 – 9/2015   AMD          Corp VP and Chief Cores Architect  Skybridge / K12 (+ Zen)
1/2016 – 4/2018   Tesla        VP Autopilot Hardware Engineering  Fully Self-Driving (FSD) Chip
4/2018 – 6/2020   Intel        Senior VP Silicon Engineering      ?
2021 –            Tenstorrent  President and CTO                  TBD

 

Topics Covered

  • AMD, Zen, and Project Skybridge
  • Managing 10,000 People at Intel
  • The Future with Tenstorrent
  • Engineers and People Skills
  • Arm vs x86 vs RISC-V
  • Living a Life of Abstraction
  • Thoughts on Moore’s Law
  • Engineering the Right Team
  • Idols, Maturity, and the Human Experience
  • Nature vs Nurture
  • Pushing Everyone To Be The Best
  • Security, Ethics, and Group Belief
  • Chips Made by AI, and Beyond Silicon

 

AMD, Zen, and Project Skybridge

Ian Cutress: Most of the audience questions are focused on your time at AMD, so let’s start there. You worked at AMD on Zen, and on the Skybridge platform – AMD is now gaining market share with the Zen product line, and you’re off to bigger and better things. But there has been a lot of confusion as to your exact role at AMD during that project. Some people believe you were integral in nailing down Zen’s design, then the Zen 2 and Zen 3 high-level microarchitectures. Others believe that you put the people in place, signed off at a high level, and then went to focus on the Arm version of Skybridge, K12. Can you give us any clarity as to your role there, how deep you went with Zen versus K12, or your involvement in things like Infinity Fabric?

Jim Keller: Yeah, it was a complicated project, right? At AMD when I joined, they had Bulldozer and Jaguar, and they both had some charming features, but they weren’t successful in the market. The roadmaps weren’t aggressive, and they were falling behind Intel – that’s not a good thing to do if you’re already behind; you better be catching up, not falling further behind. So I took the role, and I was in charge of the CPU team, which I think when I joined was 500 people. Then over the next three years the SoC team, the Fabric team, and some IP teams joined my little gang. I think when I left it was 2,400 people, I was told. So I was a VP with a staff. I had senior directors reporting to me, and the senior fellows, and my staff was 15 people. So I was hardly writing RTL!

That said we did a whole bunch of things. I’m a computer architect, I’m not really a manager. I wanted the management role, which was the biggest management role I’d had at the time. Up to that point I’d been the VP of a start-up, but that was 50 people, and we all got along – this was a fairly different play for me. I knew that the technical changes we had to make would involve getting people aligned to it. I didn’t want to be the architect on the side arguing with the VP about why somebody could or couldn’t do the job, or why this was the right or wrong decision. I spoke to Mark Papermaster, I told him my theory, and he said ‘okay, we’ll give it a try’, and it worked out pretty good.

With that I had direct authority as it were – but people don’t really do what they’re told to do, right? They do what they’re inspired to do. So you have to lay out a plan, and part of it was finding out who were the right people to do these different things, and sometimes somebody is really good, but people get very invested in what they did last time, or they believe things can’t be changed, and I would say my view was things were so bad that almost everything had to change. So I went in with that as a default. Does that make sense? Now, it wasn’t that we didn’t find a whole bunch of stuff that was good to use. But you had to prove that the old thing was good, as opposed to prove the new thing was good, so we changed that mindset.

Architecturally, I had a pretty good idea of what I wanted to build and why. I found people inside the company, such as Mike Clark, Leslie Barnes, Jay Fleischman, and others. There were quite a few really great people who, once we described what we wanted to do, were like, ‘yeah, we want to do that’. Architecturally, I had some input. There were often decisions and analysis, and people have different opinions, so I was fairly hands-on doing that. But I wasn’t doing block diagrams or writing RTL. We had multiple projects going on – there was Zen, there was the Arm cousin of that, the follow-on, and some new SoC methodology. But we did more than just CPU design – we did methodology design, IP refactoring, very large organizational changes. I was hands-on top to bottom with all that stuff, so it makes sense.

IC: A few people consider you ‘The Father of Zen’ – do you think you’d subscribe to that title? Or should it go to somebody else?

JK: Perhaps one of the uncles. There were a lot of really great people on Zen. There was a methodology team that was worldwide, the SoC team was partly in Austin and partly in India, the floating-point cache was done in Colorado, the core execution front end was in Austin, the Arm front end was in Sunnyvale, and we had good technical leaders. I was in daily communication for a while with Suzanne Plummer and Steve Hale, who kind of built the front end of the Zen core, and the Colorado team. It was really good people. Mike Clark’s a great architect, so we had a lot of fun, and success. Success has a lot of authors – failure has one. So that was a success. Then some teams stepped up – we moved Excavator to the Boston team, where they took over finishing the design and the physical stuff, Harry Fair and his guys did a great job on that. So there were some fairly stressful organizational changes that we did, going through that. The team all came together, so I think there was a lot of camaraderie in it. So I won’t claim to be the ‘father’ – I was brought in, you know, as the instigator and the chief nudge, but part architect part transformational leader. That was fun.

IC: Is everything that you worked on now out at AMD, or is there still, kind of roadmap stuff still to come out, do you think from the ideas that you helped propagate?

JK: So when you build a new computer – and Zen was a new computer – there was already work underway. You basically build in a roadmap, so I was thinking about what we were going to do for five years, chip after chip. We did this at Apple too when we built the first big core there – we built big bones [into the design]. When you make a computer faster, there are two ways to do it: you make the fundamental structure bigger, or you tweak features. Zen had a big structure, and then there were obvious things to do for several generations to follow. They’ve been following through on that.

So at some point, they will have to do another big rewrite and change. I don’t know if they started that yet. What we had planned for the architectural performance improvements were fairly large, over a couple of years, and they seem to be doing a great job of executing to that. But I’ve been out of there for a while – four or five years now.

IC: Yeah, I think they said that Zen 3, the last one that just came out was a rewrite. So I think some people are thinking that was still under your direction.

JK: Yeah, it’s hard to say. Even when we did Zen, we did a from-scratch design – a clean design at the top. But then when they built it, there was a whole bunch of pieces of RTL that came from Bulldozer, and Jaguar, which were perfectly good to use. They just had to be modified and built into the new Zen structure. So hardware guys are super good at using code when it’s good.

So when they say they did a big rewrite, they probably took some pieces and re-architected them at the top, but when they built the code, it wouldn’t surprise me if somewhere between 20% and 80% of the code was the same stuff, or mildly modified, but that’s pretty normal. The key is to get the structure right, and then reuse code as needed, as opposed to taking something that’s complicated and trying to tweak it to get somewhere. So if they did a rewrite, they probably fixed the structure.

 

Managing 10,000 People at Intel

IC: I know it’s still kind of fresh, so I’m not sure what kind of NDAs you are still under, but your work at Intel – was that more of a clean slate? Can you go into any detail about what you did there?

JK: I can’t talk too much, obviously. The role I had was Senior Vice President of the Silicon Engineering Group, and the team was 10,000 people. They’re doing so many different things, it’s just amazing. It was something like 60 or 70 SoCs in flight at a time, literally from design to prototyping, debugging, and production. So it was a fairly diverse group, and there my staff was vice presidents and senior fellows, so it was a big organizational thing.

I had thought I was going there because there was a bunch of new technology to go build. I spent most of my time working with the team about both organizational and methodology transformation, like new CAD tools, new methodologies, new ways to build chips. A couple of years before I joined, they started what’s called the SoC IP view of building chips, versus Intel’s historic monolithic view. That to be honest wasn’t going well, because they took the monolithic chips, they took the great client and server parts, and simply broke it into pieces. You can’t just break it into pieces – you have to actually rebuild those pieces and some of the methodology goes with it.

We found a bunch of people [internally] who were really excited about working on that, and I also spent a lot of time on IP quality, IP density, libraries, characterization, and process technology. You name it, I was on it. My days were kind of wild – some days I’d have 14 different meetings in one day. It was just click, click, click, click, so many things going on.

IC: All those meetings, how did you get anything done?

JK: I don’t get anything done technically! I was the senior vice president – the job is evaluation, setting direction, making judgment calls, or let’s say trying some organizational change, or people change. That adds up after a while. The key thing about getting somewhere is to know where you are going, and then put an organization in place that knows how to do that – and that takes a lot of work. So I didn’t write much code, but I did send a lot of text messages.

IC: Now Intel has a new engineering-focused CEO in Pat Gelsinger. Would you ever consider going back if the right opportunity came up?

JK: I don’t know. I have a really fun job now, and in a really explosive growth market. So I wish him the best. I think it was a good choice [for Pat as CEO], and I hope it’s a good choice, but we’ll see what happens. He definitely cares a lot about Intel, and he’s had real success in the past. He’s definitely going to bring a lot more technical focus to the company. But I liked working with Bob Swan just fine, so we’ll see what happens.

 

The Future with Tenstorrent

IC: You are now several companies on from AMD, at a company called Tenstorrent, with an old friend in Ljubisa Bajic. You’ve been jumping from company to company to company for basically your whole career. You’re always finding another project, another opportunity, another angle. Not to be too blunt, but is Tenstorrent going to be a forever home?

JK: First, I was at Digital (DEC) for 15 years, right! Now that was a different career because I was in the mid-range group where we built computers out of ECL – these were refrigerator-sized boxes. I was in the DEC Alpha team where we built little microprocessors, little teeny things, which at the time we thought were huge. These were 300 square millimeters at 50 watts, which blew everybody’s mind.

So I was there for a while, and I went to AMD right during the internet rush, and we did a whole bunch of stuff in a couple of years. We started Opteron, HyperTransport, 2P servers – it was kind of a whirlwind of a place. But I got sucked up or caught up in the enthusiasm of the internet, and I went to SiByte, which got bought by Broadcom, and I was there for four years total. We delivered several generations of products.

I was then at P.A. Semi, and we delivered a great product, but they didn’t really want to sell the product for some reason, or they thought they were going to sell it to Apple. I actually went to Apple, and then Apple bought P.A. Semi, and then I worked for that team, so you know I was between P.A. Semi and Apple. That was seven years, so I don’t really feel like that was jumping around too much.

Then I jumped to AMD I guess, and that was fun for a while. Then I went to Tesla where we delivered Hardware 3 (Tesla Autopilot). So that was kind of phenomenal. From a standing start to driving a car in 18 months – I don’t think that’s ever been done before, and that product shipped really successfully. They built a million of them last year. Tesla and Intel were a different kind of a whirlwind, so you could say I jumped in and jumped out. I sure had a lot of fun.

So yeah, I’ve been around a little bit. I like to think I mostly get done what I set out to accomplish. My success rate there is pretty high in terms of delivering products that have lasting value. I’m not the guy to tweak things in production – it’s either a clean piece of paper or a complete disaster. Those seem to be the things I do best at. It’s good to know yourself – I’m not an operational manager. So Tenstorrent is more the clean piece of paper. The AI space is exploding. The company itself is already many years old, but we’re building a new generation of parts, going to market, and starting to sell stuff. I’m CTO and president, and have a big stake in the company, both financially and as a commitment to my friends there, so I plan on being here for a while.

IC: I think you said before that going beyond the matrix, you end up with massive graph structures, especially for AI and ML. The whole point about Tenstorrent is that it’s a graph compiler and a graph compute engine, not just a simple matrix multiply.

JK: This comes from old math, and I’m not a mathematician, so mathematicians are going to cringe a little bit, but there was scalar math, like A = B + C x D. When you had a small number of transistors, that’s the math you could do. When we got more transistors, you could say ‘I can do a vector of those’, like a whole set of those equations in a step. Then we got more transistors, and we could do a matrix multiply. Then as we got even more transistors, you wanted to take those big operations and break them up, because if you make your matrix multiplier too big, the power of just getting data across the unit is a waste of energy.
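That scalar-to-vector-to-matrix progression can be sketched in a few lines of NumPy; this is purely my illustration, and all the sizes and values are arbitrary:

```python
import numpy as np

# Scalar math: one multiply-add per "instruction", A = B + C x D.
b, c, d = 2.0, 3.0, 4.0
a = b + c * d  # 14.0

# Vector math: the same multiply-add applied across a whole vector in one step.
B = np.full(8, 2.0)
C = np.full(8, 3.0)
D = np.full(8, 4.0)
A = B + C * D  # eight multiply-adds at once, every entry 14.0

# Matrix math: one operation now implies n^3 multiply-adds.
M = np.ones((64, 64))
N = np.ones((64, 64))
P = M @ N  # 64^3 multiply-adds in one operation; every entry is 64.0
```

Each step does the same fundamental multiply-add, but the amount of work expressed per operation grows by orders of magnitude.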

So you find you want to build this optimal size block that’s not too small, like a thread in a GPU, but it’s not too big, like covering the whole chip with one matrix multiplier. That would be a really dumb idea from a power perspective. So then you get this array of medium size processors, where medium is something like four TOPs. That is still hilarious to me, because I remember when that was a really big number. Once you break that up, now you have to take the big operations and map them to the array of processors and AI looks like a graph of very big operations. It’s still a graph, and then the big operations are factored down into smaller graphs. Now you have to lay that out on a chip with lots of processors, and have the data flow around it.
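One way to picture breaking a big operation into medium-sized blocks is a tiled matrix multiply. This is a rough sketch of my own: the block size is arbitrary, and the Python loops stand in for dispatching each tile of work to one core in an array of cores:

```python
import numpy as np

def blocked_matmul(A, B, block=16):
    """Break one big matrix multiply into medium-sized tiles, as if each
    tile-sized chunk of work were handed to one core in an array of cores."""
    n = A.shape[0]  # assume square matrices with n divisible by block
    C = np.zeros((n, n))
    for i in range(0, n, block):           # each (i, j) output tile...
        for j in range((0), n, block):
            for k in range(0, n, block):   # ...accumulates tile-sized products
                C[i:i+block, j:j+block] += (
                    A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
                )
    return C

A = np.random.rand(64, 64)
B = np.random.rand(64, 64)
assert np.allclose(blocked_matmul(A, B), A @ B)
```

The result is identical to the monolithic multiply, but each individual piece of work stays an "optimal" size rather than spanning the whole chip.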

This is a very different kind of computing than running a vector or a matrix program. So we sometimes call it scalar, vector, matrix, graph computing. Raja used to call it spatial compute, which would probably be a better word.

IC: Alongside the Tensix cores, Tenstorrent is also adding vector engines into its cores for the next generation. How does that fit in?

JK: Remember that general-purpose CPUs have vector engines on them – it turns out that when you’re running AI programs, there is some general-purpose computing you just want to have. There are also times in the graph where you want to run a C program on the result of an AI operation, and having that compute tightly coupled is nice. [By keeping] it on the same chip, the latency is super low, and the power to get back and forth is reasonable. So yeah, we’re working on an interesting roadmap for that. That’s a little computer architecture research area: what’s the right mix of accelerated computing and general-purpose computing, and how are people using it? Then how do you build it in a way programmers can actually use? That’s the trick, which we’re working on.

 

Engineers and People Skills

IC: If I go through your career, you’ve gone between high-performance computing and low-power efficient computing. Now you’re in the world of AI acceleration. Has it ever gotten boring?

JK: No, and it’s really weird! Well it’s changed, and it’s changed so much, but at some level it doesn’t change at all. Computers at the bottom, they just add ones and zeros together. It’s pretty easy. 011011100, it’s not that complicated.

But I worked on the VAX 8800 where we built it out of gate arrays that had 200 OR gates in each chip. Like 200, right? Now at Tenstorrent, our little computers, we call them Tensix cores, are four trillion operations per second per core, and there’s 100 of them in a chip. So the building block has shifted from 200 gates to four Tera Ops. That’s kind of a wild transformation.

Then the tools are way better than they used to be. What you can do now – you can’t build more complicated things unless the abstraction levels change and the tools change. There have been so many changes on that kind of stuff. When I was a kid, I used to think I had to do everything myself – and I worked like a maniac and coded all the time. Now I know how to work with people and organizations and listen. Stuff like that. People skills. I probably would have a pretty uneven scorecard on the people skills! I do have a few.

IC: Would you say that engineers need more people skills these days? Because everything is complex, everything has separate abstraction layers, and if you want to work between them you have to have the fundamentals down.

JK: Now here’s the fundamental truth: people aren’t getting any smarter. So people can’t continue to work across more and more things – that’s just dumb. But you do have to build tools and organizations that support people’s ability to do complicated things. The VAX 8800 team was 150 people. But the team that built the first or second processor at Apple, the first big custom core, was also 150 people. Now, the CAD tools are unbelievably better, and we use thousands of computers to do simulations, plus we have tools that can place and route 2 million gates versus 200. So something has changed radically, but the number of people an engineer might talk to in a given day didn’t change at all. If you have an engineer talk to more than five people a day, they’ll lose their mind. So, some things are really constant.

 

CPU Instruction Sets: Arm vs x86 vs RISC-V

IC: You’ve spoken about CPU instruction sets in the past, and one of the biggest requests I got for this interview was about your opinion on CPU instruction sets. Specifically, questions came in about how we should deal with fundamental limits on them, how we pivot to better ones, and what your skin in the game is in terms of Arm versus x86 versus RISC-V. I think at one point you said most compute happens on a couple of dozen op-codes. Am I remembering that correctly?

JK: [Arguing about instruction sets] is a very sad story. It’s not even a couple of dozen [op-codes] – 80% of core execution is only six instructions – you know, load, store, add, subtract, compare and branch. With those you have pretty much covered it. If you’re writing in Perl or something, maybe call and return are more important than compare and branch. But instruction sets only matter a little bit – you can lose 10%, or 20%, [of performance] because you’re missing instructions.
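As a toy illustration of how far those six instructions go (this is entirely my sketch, not anything from the interview), here is a tiny machine whose only instruction types are load, store, add, subtract, compare, and branch, summing an array with them:

```python
# A toy machine with only six instruction types: load, store, add, sub,
# cmp, and branch -- the ops said to dominate real instruction streams.
def run(program, memory, regs):
    pc = 0
    lt = False  # flag set by cmp, consumed by branch
    while pc < len(program):
        op, *args = program[pc]
        if op == "load":      regs[args[0]] = memory[regs[args[1]]]
        elif op == "store":   memory[regs[args[1]]] = regs[args[0]]
        elif op == "add":     regs[args[0]] += regs[args[1]]
        elif op == "sub":     regs[args[0]] -= regs[args[1]]
        elif op == "cmp":     lt = regs[args[0]] < regs[args[1]]
        elif op == "branch":  # taken only if the last cmp set "less than"
            if lt:
                pc = args[0]
                continue
        pc += 1
    return regs

# Sum memory[0..3] into r0; r1 is the index, r2 the limit, r3 the stride.
memory = [5, 6, 7, 8]
regs = {"r0": 0, "r1": 0, "r2": 4, "r3": 1, "r4": 0}
program = [
    ("load", "r4", "r1"),  # r4 = memory[r1]
    ("add", "r0", "r4"),   # r0 += r4
    ("add", "r1", "r3"),   # r1 += 1
    ("cmp", "r1", "r2"),   # set lt if r1 < 4
    ("branch", 0),         # loop back while lt
]
result = run(program, memory, regs)["r0"]
print(result)  # 26
```

Even this minimal repertoire expresses loops, indexing, and accumulation, which is why those few op-codes cover so much of real execution.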

For a while we thought variable-length instructions were really hard to decode. But we keep figuring out how to do that. You basically predict where all the instructions are in tables, and once you have good predictors, you can predict that stuff well enough. So fixed-length instructions seem really nice when you’re building little baby computers, but if you’re building a really big computer, to predict or to figure out where all the instructions are, it isn’t dominating the die. So it doesn’t matter that much.

When RISC first came out, x86 was half microcode. So if you look at the die, half the chip is a ROM, or maybe a third or something. And the RISC guys could say that there is no ROM on a RISC chip, so we get more performance. But now the ROM is so small, you can’t find it. Actually, the adder is so small, you can hardly find it. What limits computer performance today is predictability, and the two big ones are instruction/branch predictability, and data locality.

Now the new predictors are really good at that. They’re big – two predictors are way bigger than the adder. That’s where you get into the CPU versus GPU (or AI engine) debate. The GPU guys will say ‘look there’s no branch predictor because we do everything in parallel’. So the chip has way more adders and subtractors, and that’s true if that’s the problem you have. But they’re crap at running C programs.

GPUs were built to run shader programs on pixels, so if you’re given 8 million pixels, and the big GPUs now have 6000 threads, you can cover all the pixels with each one of them running 1000 programs per frame. But it’s sort of like an army of ants carrying around grains of sand, whereas big AI computers, they have really big matrix multipliers. They like a much smaller number of threads that do a lot more math because the problem is inherently big. Whereas the shader problem was that the problems were inherently small because there are so many pixels.

There are genuinely three different kinds of computers: CPUs, GPUs, and AI. NVIDIA is kind of doing the ‘inbetweener’ thing where they’re using a GPU to run AI, and they’re trying to enhance it. Some of that is obviously working pretty well, and some of it is obviously fairly complicated. What’s interesting, and this happens a lot, is that general-purpose CPUs when they saw the vector performance of GPUs, added vector units. Sometimes that was great, because you only had a little bit of vector computing to do, but if you had a lot, a GPU might be a better solution.

IC: So going back to the ISA question – many people were asking what you think about Arm versus x86. Which one has the legs, which one has the performance? Do you care much, if at all?

JK: I care a little. Here’s what happened – when x86 first came out, it was super simple and clean, right? At the time, there were multiple 8-bit architectures: x86, the 6800, the 6502. I programmed probably all of them way back in the day. Then x86, oddly enough, was the open version – Intel surprisingly licensed it to seven different companies, which gave people opportunity. Then they went to 16 bits and 32 bits, then they added virtual memory, virtualization, security, then 64 bits and more features. As you add stuff to an architecture, you keep the old stuff so it stays compatible.

So when Arm first came out, it was a clean 32-bit computer. Compared to x86, it just looked way simpler and easier to build. Then they added a 16-bit mode and the IT (if then) instruction, which is awful. Then [they added] a weird floating-point vector extension set with overlays in a register file, and then 64-bit, which partly cleaned it up. There was some special stuff for security and booting, and so it has only got more complicated.

Now RISC-V shows up and it’s the shiny new cousin, right? Because there’s no legacy. It’s actually an open instruction set architecture, and people build it in universities where they don’t have time or interest to add too much junk, like some architectures have. So relatively speaking, just because of its pedigree, and age, it’s early in the life cycle of complexity. It’s a pretty good instruction set, they did a fine job. So if I was just going to say if I want to build a computer really fast today, and I want it to go fast, RISC-V is the easiest one to choose. It’s the simplest one, it has got all the right features, it has got the right top eight instructions that you actually need to optimize for, and it doesn’t have too much junk.

IC: So modern instruction sets have too much bloat, especially the old ones. Legacy baggage and such?

JK: Instructions that have been iterated on, and added to, have too much bloat. That’s what always happens. As you keep adding things, the engineers struggle. You can have a really good design with 10 features, and so you add some features to it. The features all make it better, but they also make it more complicated. As you go along, every new feature added gets harder to do, because the interaction between that feature and everything else gets terrible.

The marketing guys, and the old customers, will say ‘don’t delete anything’, but in the meantime they are all playing with the new fresh thing that only does 70% of what the old one does, but it does it way better because it doesn’t have all these problems. I’ve talked about diminishing return curves, and there’s a bunch of reasons for diminishing returns, but one of them is the complexity of the interactions of things. They slow you down to the point where something simpler that did less would actually be faster. That has happened many times, and it’s some result of complexity theory and you know, human nefariousness I think.

IC: So did you ever see a situation where x86 gets broken down and something just gets reinvented? Or will it just remain sort of legacy, and then just new things will pop up like RISC-V to kind of fill the void when needed?

JK: x86-64 was a fairly clean slate, but obviously it had to carry all the old baggage for this and that. They deprecated a lot of the old 16-bit modes. There’s a whole bunch of gunk that disappeared, and sometimes if you’re careful, you can say ‘I need to support this legacy, but it doesn’t have to be performant, and I can isolate it from the rest’. You either emulate it or support it.

We used to build computers such that you had a front end, a fetch, a dispatch, an execute, a load store, an L2 cache. If you looked at the boundaries between them, you’d see 100 wires doing random things that were dependent on exactly what cycle or what phase of the clock it was. Now these interfaces tend to look less like instruction boundaries – if I send an instruction from here to there, now I have a protocol. So the computer inside doesn’t look like a big mess of stuff connected together, it looks like eight computers hooked together that do different things. There’s a fetch computer and a dispatch computer, an execution computer, and a floating-point computer. If you do that properly, you can change the floating-point without touching anything else.

That’s less of an instruction set thing – it’s more about what your design principles were when you built it, and then how you did it. The thing is, when you get to a problem, you could say ‘if I could just have these five wires between these two boxes, I could get rid of this problem’. But every time you do that, every time you violate the abstraction layer, you’ve created a problem for future Jim. I’ve done that so many times. If you solve it properly, it stays clean, but if at some point you hack it a little bit, then that kills you over time.

 

Living a Life of Abstraction

IC: I’ve seen a number of talks where you speak about the concept of abstraction layers, not only in many aspects of engineering, but in life as well – this concept that you can independently upgrade different layers without affecting those above and below, providing new platforms to build upon. At what point in your life did that ethos click, and what happened to make it such a pervasive element of your personality?

JK: Pervasive element of my personality? That’s pretty funny! I know I repeat it a lot, maybe I’m trying to convince myself.

Like, when we built EV6, Dirk Meyer was the other architect. We had a couple of other strong people. We divided the design into a few pieces and wrote a very simple performance model. But when we built the thing, it was a relatively short pipe for an out-of-order machine, because we were still a little weak on predictors. There were a lot of interactions between things, and it was a difficult design to build. We also built it with the custom design methodology Digital had at the time, so we had 22 different flip-flops, and people could and would roll their own flip-flop. We frequently built large structures out of transistors. I remember somebody asked me what elements were in our library, and I said, both of them! N-devices and P-devices, right? Then I went to AMD, and K7 was built with a cell library.

Now, the engineers there were really good at laying down the cell libraries in a way that got good performance. They only had two flip-flops – a big one and a little one – and a clean cell library. They had an abstraction layer between the transistors and the designers. This was before the age of really good place-and-route tools, and that was way better.

Then there’s the interface that we built on EV6, which was later called the S2K bus. We originally had a lot of complicated transactions to do snoops, and loads, and stores, and reads, and writes, and all kinds of stuff. A friend of mine at Digital Research Lab listened to me explain how it worked one day, and he just shook his head. He said ‘Jim, that’s not the way you do this’. He explained how virtual channels worked, and how you could have separate abstract channels of information – you get that right before you start encoding commands. The result of that educational seminar/ass-kicking was HyperTransport. It has a lot of the S2K protocol, but it was built in a much more abstract way. So I would say that my move from Digital to AMD was where we had the ideas of how to build high-performance computing, but at Digital the methodologies were integrated, so from transistor up to architecture it could be the same person.

At AMD, there's Mike Clark, the architects, the microarchitects, and the RTL people who write Verilog, and that literally got translated to the gate libraries by the gate people – it was much more of a layered approach. K7 was quite a fast processor, but on our first swing at K8 we kind of went backwards. My favorite circuit partner at the time – he and I could talk about big designs, and we saw them as transistors, but that's a complicated way to build computers. Since then, I've become more convinced that the abstraction layers were right. You don't overstep human capability – that's the biggest problem. If you want to build something bigger and more complicated, you had better solve the abstraction layers, because people aren't getting smarter. If you put more than 100 people on it, it'll slow down, not speed up, and so you have to solve that problem.

IC: If you have more than 100 people, you need to split into two abstraction layers?

JK: Exactly. There are reasons for that: human beings are really good at tracking relationships. Your inner circle of friends is like 10-20 people, it's like a close family, and then there's this group of 50 to 100, depending on how it's organized, that you can keep track of. But above that, you treat everybody outside your group of 100 people as semi-strangers. So you have to have some different contracts about how you do it. Like when we built Zen, we had 200 people, with half the team on the front end and half the team on the back end. The interface between them was defined, and they didn't really have to talk to each other about the details behind the contract. That was important. Now they got along pretty well and they worked together, but they didn't constantly have to go back and forth across that boundary.

 

Thoughts on Moore’s Law

IC: You've said on stage, and in interviews in the past, that you're not worried about Moore's Law. You're not worried about the process node side, about the evolution of semiconductors – it will eventually get worked out by someone, somewhere. Would you say your attitude towards Moore's Law is apathetic?

JK: I'm super proactive – that's not apathetic at all. Like, I know a lot of details about it. People conflate a few things: when Intel's 10-nanometer slipped, people said that Moore's Law was dead, but TSMC's roadmap didn't slip at all.

Some of that is because TSMC's roadmap was aligned to EUV machine availability. So when they went from 16nm, to 10nm, to 7nm, they did something that TSMC has been really good at – doing these half steps. So they did 7nm without EUV, and then 7nm with EUV, then 5nm without, and 5+nm with EUV, and they tweaked stuff. Then with the EUV machines, for a while people weren't sure if they were going to work. But now ASML's market cap is twice that of Intel's (it's actually about even now, as of June 21st).

Then there's a funny thing I realized about the locus of innovation – we tend to think of TSMC, Samsung, and Intel as the process leaders, but a lot of the leadership is actually in the equipment manufacturers like ASML, and in materials. If you look at who is building the innovative stuff, and at worldwide EUV sales, TSMC is going to buy something like 150 EUV machines by 2023. The numbers are phenomenal, because even a few years ago not many people were sure that EUV was going to work at all. But now there's X-ray lithography coming up, and again, you can say it's impossible, but bloody everything has been impossible! The fine print is what Richard Feynman said – he's kind of smart. He said 'there's lots of room at the bottom', and I personally can count: if you look at how many atoms are across a transistor, there are a lot. If you look at how many atoms you actually need to make a junction without too many quantum effects, there are only 10. So there is room there.

There's also this funny thing – there's a belief system where everybody believes technology is moving at this pace, and the whole world is oriented towards it. But technology isn't one thing. There are people who figure out how to build transistors, like what the process designers do at Intel, or TSMC, or Samsung. They use equipment which can do features, but then the features actually interact, and then there's a really interesting trade-off between, like, how should this be deposited and etched, how tall should it be, how wide, in what space. They are the craftsmen using the tools, so the tools have to be super sharp, and the craftsmen have to be super knowledgeable. That's a complicated play. There's lots of interaction, and at some level, because the machines themselves are complicated, you have this complexity combination where the machine manufacturers are doing different pieces, but they don't always coordinate perfectly, or they coordinate through the machine integration guys who designed the process, and that's complicated. It can slow things down. But it's not due to physics fundamentals – we're making good progress on physics fundamentals.

IC: In your scaled ML talk, the one that you gave in Comic Sans, you had the slide about printing an X, showing that as time goes on the way you print the X changes because of the laws of physics, and that there are still several more steps to go in EUV. Also, High-NA EUV is coming in a couple of years, but now you mention X-rays. What's the timeline for that? It's not even on my radar yet.

JK: Typically when a technology comes along, they use it for one thing first. When EUV was first used in DRAM, it was literally for one step, maybe two. So I'm trying to remember – perhaps 2023/2024? It's not that far away. That means they're already up and running, and people are playing with it. Then the wild thing is, when they went from optical light to EUV, it was about a 10x reduction in wavelength. So while they had crazy multi-patterning and interference kind of stuff – you saw those pictures with DUV – when it came to EUV, they could just print direct. But actually, [as you go smaller] they can use the same tricks on EUV. So EUV is going to multi-patterning, I think at 3nm. Then there are so many tricks you can do with that. So yeah, the physics is really interesting. Then along with the physics there's the optics stuff, and then there's the purity of the materials, which is super important, then temperature control, so things don't move around too much. Everywhere you look there are interesting physics problems, and so there's lots to do. There are hundreds of thousands of people working on it, and there's more than enough innovation bandwidth.

 

Engineering the Right Team

IC: So pivoting to a popular question we've had: one of the things we've noted as you go from company to company is how you build a team. We've seen some leaders take engineers from a team they've built at a previous company to the next one. Do you have any insights into how you build your teams? Have there been different approaches at the companies you've worked for?

JK: The first thing you have to realize is whether you are building a team, or finding one. So there's a great museum in Florence, the Accademia, where Michelangelo's David is, and at the front of the museum there are these huge blocks of marble, 20 by 20 by 20. How they moved them, I don't know. A block of marble was sitting there, and Michelangelo could see this beautiful sculpture in it. It was already there, right? The problem was removing the excess marble.

So if you go into a company with 1000 employees, I guarantee you, there's a good team there. You don't have to hire anybody. When I was at AMD, I hardly hired anybody. We moved people around, we re-deployed people [elsewhere], but there were plenty of great people there. When I went to Tesla, we had to build the team from scratch, because there was nobody at Tesla building chips. I hired people that I knew, but then at some point we hired a bunch of people that I didn't know, and this is one of those interesting things.

I've seen leaders go from one company to another and bring their 20 people, and then they start trying to reproduce what they had before. That's a bad idea, because although 20 people is enough to reproduce [what you had], it alienates [the rest of] the new team. When you build a new team, ideally you get people you really like – either you just met them, or you've worked with them – but you want some differences in approach and thinking, because everybody gets into a local minimum. So the new team has this opportunity to make something new together. Some of that is because if you had ten really great teams all working really well, and then you made a new team with one person from each of those teams, that may well be better, because they will re-select the best ideas.

But every team has pluses and minuses, and so you have to think about whether you're building a team or finding one, and then what dynamic you're trying to create that gives people space to have new ideas. Some people get stuck on one idea, then they work with new people and start doing this incredible thing, and you think they're great, even though they used to be not so great – so what happened? Well, they were carrying some idea around that wasn't great, and then they met somebody who challenged them, or the environment forced them, and all of a sudden they're doing a great job. I've seen that happen so many times.

Ken Olsen at Digital (DEC) said there are no bad employees, there are just bad employee-job matches. When I was younger, I thought that was stupid. But as I've worked with more people, I've seen it happen so many bloody times. I've even fired people who went on to be really successful, all because they weren't doing a good job and they were stuck, emotionally – they felt committed to something that wasn't working. The act of moving them to a different place freed them up. [Needless to say] I don't get a thank you. (laughs)

IC: So how much of that also comes down to company culture? I mean, when you're looking for the right person for a position, or you're hiring in for a new position, do you try and get somebody who goes against the company grain, or with the company grain? Do you have any tactics here, or are you just looking for someone with spark?

JK: If you're trying to do something really innovative, it's probably mostly going against [the grain]. If you have a project that's going really well, bringing in instigators is going to slow everybody down, because you're already doing well. You have to read the group and the environment. Then there are some people who are really good and really flexible – they go on this project, they fit in and just push, but all the while they have been building their network and the team, and on the next project they're ready to do a pivot and everybody's willing to work. Trust is a funny thing, right? You know, if somebody walks up and says jump off this bridge, you'll be fine, you're likely to call bullshit – but if you had already been through a whole bunch of stuff with them, and they said 'look, trust me, then jump – you're going to be fine; it's going to suck, but it's going to be fine', you'll do it, right? Teams that trust each other are way more effective than ones that have to do everything with contracts, negotiation, and politics.

So that's probably one thing – whether you're building or finding a team, if you start seeing people doing politics, which means manipulating the environment for their own benefit, they have got to go. Unless you're the boss! Then you have got to see if they deliver. Some people are very political, but they really think their political strength comes from delivering. But random people in an organization being political just causes lots of stress.

IC: Do you recommend that early or mid-career engineers should bounce around regularly from project to project, just so they don’t get stuck in a hole? It sounds like that’s a common thing.

JK: You learn fastest when you're doing something new, and working for somebody who knows way more than you. So if you're relatively early in your career and you're not learning a lot, or, you know, the people that you're working for aren't inspiring you, then yeah, you should probably change. But there are some careers where I've seen people bounce around three times because they're getting experience, and they end up being good at nothing. They would have been better off staying where they were and really getting deep at something. So you know, there's a creative tension between those two ideas.

 

Idols, Maturity, and the Human Experience

IC: So that kind of leads into a good question, actually, because I wanted to ask about you and your mentors going through your early career. Who did you look up to for leadership or knowledge or skills? Is there anyone you idolize?

JK: Oh, yeah, lots of people. Well, it started out with my parents. Like, I was really lucky. My father was an engineer, and my mom was super smart, kind of more verbally and linguistically. The weird thing was that when I grew up, I thought I was like my father because he was an engineer, but I was actually intellectually more like my mother, thinking-wise – except I was dyslexic, I couldn't read. They were both smart people. Now, they came out of the 50s, and my mom raised the family, so she didn't start her career as a therapist until later in life. But they were pretty interesting people.

Then, when I first started at Digital, I worked for a guy named Bob Stewart, who was a great computer architect. He did the PDP-11/44, PDP-11/70, VAX 780, VAX 8800, and the CI interconnect. Somebody said that every project he had ever worked on earned a billion dollars, back when that was a huge number. So I worked for him, and he was great, but there were half a dozen other really great computer architects there. DEC also had its research labs, and I got to meet guys like Butler Lampson and Chuck Thacker and Neil Wilhelm. Nancy Kronenberg was one of my mentors when I was a little kid, and she was one of the chief people behind the VMS operating system. So that was kind of lucky.

So did I idolize them? Well, they were both daunting and not, because I was a little bit of a, you know – I didn't quite realize who they were at the time. I was a little oblivious to what was going on. Like, in my first week at Digital, we got trained on this drawing system called Valid, which was kind of before the Mentor Graphics era. So this guy walked in, and he was asking us questions and telling us about hierarchical design. I explained to him why that was partly a good idea and partly stupid, and so we had an hour-long debate about it, and then he walked off. Somebody said that was Gordon Bell. I asked 'Who's that? He's the CTO of Digital? Really? Well, he's wrong about half the stuff he just said – I hope I straightened him out.' But you know, I think that's just some serotonin activation or something. That's more of a mental problem with me than a feature, I think!

IC: So would you say you’ve matured?

JK: Not a bit!

IC: Is that where the fun is?

JK: I mean, there's a whole bunch of stuff. When I was young, I would get nervous when I gave a talk, and I realized I had to understand the people around me better. But you know, I wasn't always quite convinced – [at the time] I'd rather they just do the right thing or something. So there's a bunch of stuff that has changed. Now I'm really interested in what people think and why they think it, and I have a lot of experience with that. Every once in a while you can really help debug somebody, or get the group to work better. I don't mind giving public talks at all – I just decided that the energy I got from being nervous was fun. I still remember walking out on stage at Intel at some conference, like 2000 people. I should have been really nervous, but instead I was just really excited about it. So some of that kind of stuff changed, but that's partly conscious and partly just practice. I still get excited about computer design and stuff. A friend of mine's wife asked what they put in the water, because all we ever do is talk about computers. It's really fun, you know. Changing the world. It's great.

IC: It sounds like you have spent a lot more time, in a way, studying the human experience. If you understand how people think, how people operate, that's different compared to mouthing off at Gordon Bell for an hour.

JK: It's funny. People occasionally ask me about it, or I tell people, that I read books. You learn a lot from books. Books are fun, by the way – if you know how a book works. Somebody lives 20 years, then passionately writes down their best ideas (and there are lots of those books), and then you go on Amazon and find the best ones. It's hilarious, right? A really condensed experience in a book, written down, and you can select the better books – like, who knew, right? But I've been reading a lot of books for a long time.

It's hard to say 'read these four books, it'll change your life'. Sometimes a [single] book will change your life, but reading 1000 books will [certainly] change your life, that's for damn sure. There's so much human experience that's useful. Who knew Shakespeare would be really useful for engineering management, right? But what are all those stories – power politics, devious guys, the minions doing all the work, and the occasional hero saving the day? How does that all play out? It's always set 500 years ago, but it applies to corporate America every single day of the week. So if you don't know Shakespeare or Machiavelli, you don't know nothing.

IC: I think I remember you saying that before you went into your big first management role, you read 20 books about management techniques, and how you ended up realizing that you’d read 19 more than anybody else.

JK: Yeah, pretty much. I actually contacted Venkat (Venkatesh) Rao, who's famous for the Ribbonfarm blog and a few other things, to figure [stuff] out. I really liked his thinking about organizations from his blog, and he had a little thing at the bottom that said click here to buy him a cup of coffee, or get a consult, so I sent him an email. We started yakking, and we spent a lot of time talking before I joined AMD. He said I should read these books, and I did. I thought everybody who's in a big management job did that, but nobody does. You know, it was hilarious – like, 19 is generous. I've read 20 more management books than most managers have ever read. Or they read some superficial thing like Good to Great, which has some nice stories in it, but it's not that deep a book management-wise. You'd be better off reading Carl Jung than Good to Great if you want to understand management.

IC: Do you find yourself reading more fiction or nonfiction?

JK: As a kid, I read all the nonfiction books. Then my parents had a book club. I didn't really learn to read until I was in fourth grade, but somewhere around seventh or eighth grade I had read all the books in the house. They had John Updike, and John Barth was one of my favorite authors when I was a kid. So there were a whole bunch of stories. Then Doris Lessing – Doris Lessing wrote a series of science fiction books that were also psychological inquiries, and I read those, and I just couldn't believe it. Every once in a while stuff like that kind of blows your mind, and it happened, obviously, at the right time. But now I read all kinds of stuff. I like history and anthropology and psychology and mysticism – there are so many different things. I've probably read fewer fiction books in the last 10 years, but when I was younger I read mostly fiction.

IC: I did get a few particular questions from the audience in advance of this interview about comments you made when you were interviewed by Lex Fridman. You said that you read two books a week. You're also very adept at quoting key engineers and futurists. I'm sure if you started tweeting which book you're reading whenever you start a new one, you'd get a very large following. A sort of passive Jim Keller book club!

JK: I would say I read two books a week. Now, I read a lot, but it tends to be blogs and all kinds of crazy stuff. I don't know – doing Lex's podcast was super fun, but I don't know that I have the attention span for social media to do anything like that. I'd forget about it for weeks at a time.

IC: How do you make sure that you’re absorbing what you’re reading, rather than having your brain diverting about some other problem that you might be worrying about?

JK: I don't really worry about that. I know people who read books and are really worried about whether they're going to remember them. They spend all this time highlighting and analyzing. I read for interest, right? What I do notice is that people have to write 250-page books, because that's like a publisher rule. It doesn't matter if you have 50 pages of ideas or 500, but you can tell pretty fast. I've read some really good books that are only 50 pages, because that's all they had. You can also read 50 pages and think, 'wow, this is really great!', but then the next 50 pages is the same shit. Then you realize it's just been fleshed out – at that point I wish they had just published a shorter book.

But it is what it is. If the ideas are interesting, that's good. I meditate regularly, and then I think about what I'm thinking about, which is sometimes related to what I'm reading. Then, if it's interesting, it gets incorporated. But your brain is this kind of weird thing – you don't actually have access to all the ideas and thoughts and things you've read, but your personality seems to be well informed by them, and I trust that process. So I don't worry if I can't remember somebody's name [in a book], because their idea may have changed who I am even if I don't remember what book it came from. I don't care about that stuff.

IC: As long as you have passively absorbed it at some level?

JK: Yeah. Well, there's a combination of passive and active. I told Lex that a lot of times when I'm working on problems, I prep my dreams for them. It's really useful, and it's a fairly straightforward thing to do. Before you fall asleep, you call up in your mind what you're really working on and thinking about. In my personal experience, sometimes I really do work on that, and sometimes that's just a problem in the way of what I actually need to think about, and I'll dream about something else. I'll wake up, and one way or the other it was really interesting.

 

Nature vs Nurture

IC: So on the topic of time, here we are discussing personal health, study, meditation, and family, but also how you execute professionally. Are you one of these people who only needs four hours of sleep a night?

JK: Nah, I need like seven. Well, I added it up one day, and my ideal day would have like 34 hours in it. Because I like to work out, spend time with my kids, I like to sleep and eat, and you know, I like to work. I like to read too, so I don't know. Work is the weird one, because that can fill in lots more time than you want to spend on it. But I also really like working, so it's a challenge to kind of tamp it down.

IC: When there’s a deadline, what gets pushed out the way first? You’ve worked at companies where getting the product out, and time to market, has been a key element of what you’re doing.

JK: For about the last six years, the key thing for me is that once I have too much to do, I find somebody who wants to do it more than me. I mostly work on unsolved problems. You know, I was the laziest person at Tesla. Tesla had a culture of working 12 hours a day to make it look like you're working, and I worked, you know, 9 to 7, which was a lot of hours. But I also went running at lunch, and did a workout – they had a weightlifting room at Deer Creek, right next to the big machine shop, so I would go down there for an hour to work out and to eat.

At AMD and Intel, they’re big, big organizations, and I had a really good staff. So I’d find myself spending way too much time on presentations, or working on some particular thing. Then I’d find some people who wanted to work on it, so I’d give it to them and, you know, go on vacation.

IC: Or speaking to press people like me, taking up your time! What is your feeling about doing these sorts of press interviews, and, you know, the more marketing and corporate sort of discussion? These aren't really necessarily related to actually pushing the envelope – it's just talk.

JK: It's not just talk. I've worked on some really interesting stuff, so I like to talk about it. When I was at Intel, I realized it was one of the ways to influence the Intel engineers. Like, everybody thought Moore's Law was dead, and I thought 'holy crap, it's the Moore's Law company!'. It was really a drag if, [as an engineer], your main belief was that [Moore's Law is dead], because I thought it wasn't. So I talked to various people, then they amplified what I said and debated it, and it went back inside. You know, I actually reached more people inside of Intel by doing external talks. So that was useful to me, because I had a mission to build faster computers. That's what I like to do. When I talk to people, they always bring all kinds of stuff up, like how the work we do impacts people. Guys like you think really hard about it, and you talk to each other. Then I talk to you, and you ask all these questions, and it's kind of stimulating. It's fun. If you can explain something really clearly, you probably know it. There are a lot of times you think you know something, and then you go to explain it, and you're stumbling all around. I did some public talks that were hard to do – the talk itself seems simple, but to get to the simple part you have to get your ideas out, reorganize them, and then throw out the BS. It's a useful thing to talk.

IC: Was it Feynman or Sagan who said 'if you can't explain the concept at a first-year college level, then you don't really understand it'?

JK: Yeah, that sounds like Feynman. He did that really well, like with his lecture series on physics. It was quite interesting. Feynman's problem was that he had such a brilliant intuition for the math that his idea of simple was often not that simple! He just saw it, and you could tell. He could calculate some orbital geometry in five 'simple' steps, and he was so excited about how simple it was. But I think he was the only person in the room who thought it was simple.

IC: I presume he had the ability to visualize things in his head and manipulate them. I remember you saying at one point, that when it comes down to circuit-level design, that’s the sort of thing you can do. 

JK: Yeah. If I have one superpower, I feel like it's that I can visualize how a computer actually runs. So when I do performance modeling and stuff like that, I can see the whole thing in my head, and I'm just writing the code down. It is a really useful skill, but you know, I was probably partly born with it, partly developed it, and partly it's something that came out of my late adult diagnosis of dyslexia.

IC: I was going to ask how much of that is nature versus nurture?

JK: It's hard. There's this funny thing with super-smart people: often things are so easy for them that they can go a really long way without having to work hard. So I'm not that smart. Persistence, and what they call grit, is super useful, especially in computer design, where lots of stuff takes a lot of tweaking – you have to believe you can get there. A lot of times there's a whole bunch of subtle iterations to do, and practice with that really works. So yeah, everybody's a combination. If you don't have any talent, it's pretty hard to get anywhere, but sometimes really talented people don't learn how to work, so they get stuck just doing the things that are obvious, not the things that take persistence through the mess.

IC: Also identifying that talent is critical as well, especially if you don’t know you have it?

JK: Yeah, but on the flip side, you may have enough talent but just haven't worked hard, and some people give up too soon. You've got to do something you're really interested in. When people are struggling – like whether they want to be an engineer, or in marketing, or this or that – [ask yourself] what do you like? This is especially true for people who want to be engineers, but their parents or somebody wants them to be a manager. You're going to have a tough life, because you're not chasing your dream, you're chasing somebody else's. The odds that you will be excited about somebody else's dream are low. So if you're not excited, you're not going to put the energy in, or learn. That's a tough loop in the end.

 

Pushing Everyone To Be The Best

IC: To what extent do you spend your time mentoring others, either inside organizations, or externally with previous coworkers or students? Do you ever envision yourself doing something on a more serious basis, like the ‘Jim Keller School of Semiconductor Design’?

JK: Nah. So it's funny, because I'm mostly mission driven. Like, 'we're going to build Zen!', or 'we're going to build Autopilot!', and then there are people that work for me. As soon as they start working for me, I start figuring out who they are, and some of them are fine, and some of them have big problems that need to be, let's say, dealt with one way or the other. So then I'll tell them what I want, and sometimes I'll give them some pointed advice. Sometimes I'll do stuff, and you can tell some people are really good at learning by following. Then later on people tell me that I was mentoring them, but I'm thinking, I thought I was kicking your ass! It's a funny experience.

There are quite a few people who have said I impacted their life in some way, but for some of those, I went after them about their health or diet, because they didn't look energized by life. You can make really big improvements there – it's worth doing, by the way. It was either that, or they were doing the wrong thing and just weren't excited about it. [At that point] you can tell they should be doing something else, so they either have to figure out why they're not excited, or get excited. Then a lot of people start fussing with themselves or with other people about their status or something. The best way to have status is to do something great, and then everybody thinks you're great. Having status by trying to claw your way up is terrible, because everybody thinks you're a climber, and sometimes people don't have the competence or skill to make the right choice there. It mostly comes out of being mission driven.

I do care about people, or at least I try to, and then I see the results. I mean, it's really gratifying to get a big complicated project done. You know where it was when you started, and you know where it was when it was done, and when people work on successful things, they associate the leadership and the team they worked with as being part of that. So that's really great, but it doesn't always happen. I have a hard time doing, quote, 'mentoring people', because what's the mission? Like, somebody comes to you and says 'I want to get better'. Well, better at what? If that's like wanting to be better at playing the violin, well, I'm not good at that.

Whereas when I say 'hey, we're going to build the world's fastest Autopilot chip', then everybody working on it needs to get better at doing that. It turns out three-quarters of their problems are actually personal, not technical. So to get the Autopilot chip, you have to go debug all that stuff, and there are all kinds of personal problems – health problems, parental and childhood problems, partner problems, workplace problems, and career stall problems. The list is so bloody long, and we take them all seriously. As it turns out, everybody thinks their own problems are really important, right? You may not think their problems are important, but I tell you, they do, and they have a list. Ask anybody what their top five problems are – they can probably tell you. Or, even weirder, they give you the wrong five, because that happens too.

IC: But did they give you the five they think you want to hear rather than the actual five?

JK: Yeah. People also have no-fly zones, so their biggest problem may be something they don’t want to talk about. But if you help them solve that, then the project will go better, and then at some point, they’ll appreciate you. Then they’ll say you’re a mentor, and you’re thinking, kinda, I don’t know.

IC: So you mentioned about your project succeeding, and you know, people being proud of their products. Do you have a ‘proudest moment’ of your career, project, or accolade? Any specific moments in time?

JK: I have, and there's a whole bunch of them. I worked with Becky Loop at Intel, and we were debugging some quality things. It turns out there was a whole bunch of layers of stuff. We were going back and forth on how to analyze it, how to present it, and I was frustrated with the data and what was going on. One day she came up with this picture, and it was just perfect. I was really excited for her because she'd gotten to the bottom of it. We actually saw a line of sight to a fix and stuff. But that kind of stuff happens a lot.

IC: An epiphany?

JK: Yeah. Well sometimes working with a group of people, going into it is like a mess, but then it gets better. The Tesla Autopilot thing was wild, and Zen’s success has been fantastic. Everybody thought that the AMD team couldn’t shoot straight, and I was very intrigued with the possibility of building a really great computer with the team that everybody thought was out of it. Like nobody thought AMD had a great CPU design team. But you know, the people who built Zen, they had 25 to 30 years work history at AMD. That was insane.

IC: I mean Mike Clark and Leslie Barnes, they’ve been there for 25 to 30 years.

JK: Steve Hale, Suzanne Plummer.

IC: The Lifers?

JK: Yeah, they’re kind of lifers, but they had done many great projects there. They all had good track records. But what did we do different? We set some really clear goals, and then we reorganized to hit the goals. We did some really thorough talent analysis of where we were, and there were a couple people that had really checked out because they were frustrated that they could never do the right thing. You know I listened to them – whoa Jesus, I love to listen to people.

We had this really fun meeting, and it was one of the best experiences of my life. Suzanne called me up and said that people on the Zen team don't believe they can do it. I said, 'great – I'll drive to the airport, I'm in California, and I'll see you there tomorrow morning, eight o'clock. Make sure you have a big room with lots of whiteboards'. It was like 30 angry people ready to tell me all the reasons why it wouldn't work. So I just wrote all of the reasons down on a whiteboard, and we spent two days solving them. It was wild because it started with me defending against the gang, but people started to jump in. Whenever possible, when somebody would say 'I know how we fix that', I would give them the pen and they would get up on the board and explain it. It worked out really well. The thing was, the honesty of what they did was great. Here are all the problems that we don't know how to solve, and so we're putting them on the table. They didn't give you two reasons but hold back ten and say 'you solve those two'. There was none of that kind of bullshit. They were serious people that had real problems, and they'd been through projects where people said they could solve these problems, and they couldn't. So they were probably calling me out, but like I'm just not a bullshitter – I told them some we can do, and some I don't know how. But I remember Mike Clark was there, and he said we could solve all these problems. You know, I walked out thinking we were in pretty good shape, and people walked out of the room feeling okay, but two days later the problems all popped back up. So you know, like how often do you have to go convince somebody? But that's why they got through it. It wasn't just me hectoring them from the sidelines; there were lots of people, across lots of parts of the team, who really said they were willing to put some energy into this, which is great.

IC: At some point I’d love to interview some of them, but AMD keeps them under lock and key from the likes of us.

JK: That’s probably smart!

IC: Is there somebody in your career that you consider like a silent hero, that hasn’t got enough credit for the work that they’ve done?

JK: A person?

IC: Yeah.

JK: Most engineers. There are so many of them, it’s unbelievable. You know engineers, they don’t really get it. Compared to lawyers that are making 800 bucks an hour in Silicon Valley, engineers so often want to be left alone and do their work and crank out stuff. There are so many of those people that are just bloody great. I’ve talked to people who say stuff like ‘this is my eighth-generation memory controller’, and they’re just proud as hell because it works and there are no bugs in it, and the RTL is clean, and the commits are perfect. Engineers like that are all over the place, I really like that scenario.

IC: But they don’t self-promote, or the company doesn’t?

JK: Engineers are more introverted, and conscientious. The introverted tend not to be the people who self-promote.

IC: But aren’t you a little like me, you’ve learned how to be more extroverted as you’ve grown?

JK: Well, I decided I wanted to build bigger projects, and to do that, you have to pretend to be an extrovert, and you have to promote yourself, because there's a whole bunch of people who are decision-makers who don't do the work to find out who the best architect is. They're going to pick the person that everybody says is the best architect, or the loudest, or the most capable. So at some level, if you want to succeed above 'principal engineer', you have to understand how to work in the environment of people who play it. Some people are super good at that naturally, so they get pretty high in organizations without much talent, sometimes without much hard work. Then the group of people, Director and above, that you have to deal with have a way different skill set than most of the engineers. So if you want to be part of that gang, even if you're an engineer, you have to learn how that rolls. It's not that complicated. Read Shakespeare, Jung, Machiavelli, a couple of books like that – you can learn a lot from them.

 

Security, Ethics, and Group Belief

IC: One of the future aspects of computing is security, and we've had a wave of side-channel vulnerabilities. This is a potential can of worms, attacking the tricks that we use to make fast computers. To what extent do you approach those security aspects when you're designing silicon these days? Do you find yourself proactive or reactive?

JK: So the market is sort of dictating needs. The funny thing about security first of all is you know it only has to be secure if somebody cares about it. For years, security in an operating system was virtual memory – for a particular process, its virtual memory couldn’t look into another process’s virtual memory. But the code underneath it in the operating system was so complicated that you could trick the operating system into doing something. So basically you started from security by correct software, but once you couldn’t prove the software correct, they started putting additional hardware barriers in there. Now we’re building computers where the operating system can’t see the data the user has, and vice versa. So we’re trying to put these extra boundaries in, but every time you do, you’ve made it a little more complicated.
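
As a toy illustration of that virtual-memory boundary (a sketch of my own, not something from the interview), the snippet below spawns a child process that overwrites 'the same' variable; because each process has its own address space, the parent's copy is untouched:

```python
import multiprocessing as mp

# Each process gets its own virtual address space, so the child's write
# to `secret` lands in the child's private copy, not the parent's.
secret = 42

def clobber():
    global secret
    secret = 1337          # visible only inside the child process

if __name__ == "__main__":
    p = mp.Process(target=clobber)
    p.start()
    p.join()
    print(secret)          # the parent still sees 42
```

This is exactly the process-isolation guarantee that operating systems provide in hardware via page tables; "tricking the OS" means finding a path around that boundary rather than through it.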

At some level security worldwide is mostly security by obscurity, right? Nobody cares about you, in particular, because you're just one out of 7 billion people. Like somebody could crack your iPhone, but they mostly don't care about it. There's a funny arms race going on about this, but it's definitely kind of incremental. They discovered side-channel attacks, and they weren't that hard to fix. But there'll be some other things, and, you know, I'm not a security expert. The overhead of building security features is mostly low. The hard part is thinking it out and deciding what to do. Every once in a while somebody will say something like 'this is secure, because the software does x', and I always think, 'yeah, just wait 10 minutes, and the software will get more complicated, which will introduce a gap in it'. So there needs to be real hardware boundaries in there.

There are lots of computers that are secure, because they don’t talk to anything. Like there are boatloads of places where the computers are usually behind a hard firewall, or literally disconnected from anything. So only physical attacks work, and then they have physical guards. So now, it’s going to be interesting, but it’s not super high in my thinking, I mostly follow what’s going on, and then we’ll just do the right thing. But I have no faith in security by software, let’s say, because that always kind of grows to the point where it kind of violates its own premises. It’s happened many times.

IC: So you’ve worked at Tesla, and when you designed a product specifically for Tesla. You have also worked at companies that sell products for a wide array of uses. Beyond that sort of customer workload analysis, do you consider the myriad of possibilities of what the product you are building will be used for?  Do you consider the ethics behind what it might be used for? Or are you just there to solve the problem of building the chip?

JK: The funny thing about general-purpose computing is it can really be used for anything. So the ethics is more about whether the net good outweighs the net bad. For the most part I think the net good is better than the possible downsides. But people do have serious concerns about this. There's a big movement around ethics in AI, and to be honest, the AI capabilities have so far outstripped the thinking around the aspects of that. I don't know what to think about it.

What the current systems can do has already stripped us bare – they know what we think, what we want, and what we're doing. Then the question is how many people have that, and it's one reason to build lower-cost, programmable AI. We're talking to quite a large number of AI software startups that want AI hardware and computing in more people's hands, because then you have a little mutual standoff situation, as opposed to one winner take all. But the modern tech world has been sort of winner take all. There are literally several dozen very large companies that have a competitive relationship with each other. So, that's kind of complicated. I think about it some, but I don't have anything, you know, really good to say, besides, you know, the net benefit so far has been positive. Having technology in more people's hands rather than a concentrated few seems better, but we'll see how it plays out.

IC: You’ve worked for a number of big personalities. You know, Elon Musk, Steve Jobs to name two. It appears you still have a strong contact with Elon. Your presence at the Neuralink demo last year with Lex, was not unnoticed. What’s your relationship with Elon now, and was he the one to invite you?

JK: I was invited by somebody in the Neuralink team. I mean Elon, I would say I don’t have a lot of contact with him at the moment. I like the development team there, so I went over to talk to those guys. It was fun.

IC: So you don’t stay in touch with Elon?

JK: No, I haven’t talked to him recently, no.

IC: It was very much a professional, not a personal relationship when you worked for Tesla then?

JK: Yeah.

IC: Because I was going to ask about the fact that Elon is a big believer in Cryptocurrency. He regularly discusses it as it pertains to demands of computing and resources, for something that has no intrinsic value. Do you have any opinions when it comes to Cryptocurrency?

JK: Not much. Not really. I mean humans are really weird where they can put value in something like gold, or money, or cryptocurrency, and you know that's a shared belief contract. What it's based on, the best I can tell, hasn't mattered much. I mean the thing the crypto guys like is that it appears to be out of the hands of some central government. Whether that's true or not, I couldn't say. How that's going to impact stuff, I have no idea. But as a human, you know, group beliefs are really interesting, because when you're building things, if you don't have a group belief that makes sense then you're not going to get anything done. Group beliefs are super powerful, and they move currencies, politics, companies, technologies, philosophies, self-fulfillment. You name it. So that's a super interesting topic, but as for the details of Cryptocurrency, I don't care much about it, except as a manifestation of some kind of psychological phenomenon about group beliefs, which is actually interesting. But it seems to be more of a symptom, or a random example let's say.

 

Chips Made by AI, and Beyond Silicon

IC: In terms of processor design, currently with EDA tools there is some amount of automation in there. Advances in AI and Machine Learning are being expanded into processor design – do you ever envision a time where an AI model can design a purposeful multi-million-device chip that will be unfathomable to human engineers? Would that occur in our lifetime, do you think?

JK: Yeah, and it’s coming pretty fast. So already the complexity of a high-end AMD, Intel, or Apple chip, is almost unfathomable that any one person. But if you actually go down into details today, you can mostly read the RTL or look at the cell libraries and say, ‘I know what they do’, right? But if you go look inside a neural network that’s been trained and say, why is this weight 0.015843? Nobody knows.

IC: Isn’t that more data than design, though?

JK: Well, somebody told me this. Scientists, traditionally, do a bunch of observations and they go, 'hey, when I drop a rock, it accelerates like this'. They then calculate how fast it accelerated, and then they curve fit, and they realize 'holy crap, there's this equation'. Physicists for years have come up with all these equations, and then when they got to relativity they had to bend space, and with quantum mechanics they had to introduce probability. But these are still mostly understandable equations.
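
That observe-then-curve-fit step can be sketched in a few lines of Python; the rock-drop data here is synthetic and purely illustrative:

```python
import numpy as np

# Synthetic "dropped rock" observations: distance fallen, sampled over time.
# The hidden law is s(t) = 0.5 * g * t^2 for a rock dropped from rest.
t = np.linspace(0.0, 2.0, 21)     # seconds
g_true = 9.81                     # m/s^2
s = 0.5 * g_true * t**2           # observed distances (noise-free here)

# The "curve fit" step: fit a quadratic and read the law back off it.
a, b, c = np.polyfit(t, s, 2)     # s ≈ a*t^2 + b*t + c
g_est = 2.0 * a                   # the acceleration sits in the t^2 term

print(round(g_est, 2))            # recovers 9.81
```

The fitted coefficient is directly interpretable as a physical constant, which is exactly the property the next paragraph argues trained neural networks lack.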

There’s a phenomenon now that a machine learning thing can learn, and predict. Physics is some equation, put inputs, equation outputs, or function output, right? But if there’s a black box there, where the AI networks as inputs, a black box of AI outputs, and you if you looked in the box, you can’t tell what it means. There’s no equation. So now you could say that the design of the neurons is obvious, you know – the little processors, little four teraflop computers, but the design of the weights is not obvious. That’s where the thing is. Now, let’s go use an AI computer to go build an AI calculator, what if you go look inside the AI calculator? You can’t tell why it’s getting a value, and you don’t understand the weight. You don’t understand the math or the circuits underneath them. That’s possible. So now you have two levels of things you don’t understand. But what result do you desire? You might still be designed in the human experience.

Computer designers used to design things with transistors, and now we design things with high-level languages. So those AI things will be building blocks in the future. But it's pretty weird that there are going to be parts of science where the function is not intelligible. There used to be physics by explanation – say Aristotle, a couple of thousand years ago, who was wrong about a whole bunch of stuff. Then there was physics by equation, like Newton, Copernicus, and people like that. Stephen Wolfram says there's now going to be physics by program. There are very few programs that you can write as one equation – programs are complicated – and he says, why isn't physics like that? Well, in the computing world we now have protein folding done by AI, which has no intelligible equations or statements, so why isn't physics going to do the same thing?

IC: It’s going to be those abstraction layers, down to the transistor. Eventually, each of those layers will be replaced by AI, by some unintelligible black box.

JK: The thing that assembles the transistors will make things that we don’t even understand as devices. It’s like people have been staring at the brain for how many years, they still can’t tell you exactly why the brain does anything.

IC: It’s 20 Watts of fat and salt.

JK: Yeah and they see chemicals go back and forth, and electrical signals move around, and, you know, they’re finding more stuff, but, it’s fairly sophisticated.

IC: I wanted to ask you about going beyond silicon. We've been working on silicon now for 50+ years, and the silicon paradigm has been continually optimized. Do you ever think about what's going to happen beyond silicon, if we ever reach a theoretical limit within our lifetime? Or will anything new even get there, given that it won't have 50 years of catch-up optimization?

JK: Oh yeah. Computers started, you know, with abacuses, right? Then mechanical relays, then vacuum tubes, transistors, and integrated circuits. Now the way we build transistors, it's like a 12th-generation transistor. They're amazing, and there's more to do. The optical guys have actually been making some progress, because they can direct light through polysilicon and do some really interesting switching things. That's sort of been 10 years away for 20 years, but they actually seem to be making progress.

It’s like the economics of biology. It’s 100 million times cheaper to make a complicated molecule than it is to make a transistor. The economics are amazing. Once you have something that can replicate proteins – I know a company that makes proteins for a living, and we did the math, and it was literally 100 million times less capital per molecule than we spent on transistors. So when you print transistors it’s something interesting because they’re organized and connected in very sophisticated ways and in arrays. But our bodies are self-organizing – they get the proteins exactly where they need to be. So there’s something amazing about that. There’s so much room, as Feynman said, at the bottom, of how chemicals are made and organized, and how they’re convinced to go a certain way.

I was talking to some guys who were looking at doing a quantum computing startup, and they were using lasers to quiet down atoms, and hold them in 3D grids. It was super cool. So I think we’ve barely scratched the surface on what’s possible. Physics is so complicated and apparently arbitrary that who the hell knows what we’re going to build out of it. So yeah, I think about it. It could be that we need an AI kind of computation in order to organize the atoms in ways that takes us to that next level. But the possibilities are so unbelievable, it’s literally crazy. Yeah I think about that.

 

 

Many thanks to Jim Keller and his team for their time.

Many thanks also to Gavin Bonshor for assistance in transcription.

 

 

AOC AGON AG493UCX, When Only 32:9 Will Do


5120 × 1440, With KVM Capability and a Remote Control

If a traditional ultrawide monitor is not enough for you, then perhaps the 49″ AOC AGON AG493UCX will satisfy your craving for pixels. With a 32:9 aspect ratio and 108 ppi, this mighty VA monitor is capped at 120 Hz and uses FreeSync Premium Pro to prevent tearing in games that your GPU struggles to render. Video inputs include a pair of HDMI 2.0 ports and DisplayPort 1.4, with a USB-C port that also carries a DisplayPort signal. There is also an upstream USB port along with three outputs, allowing the display to function as a KVM thanks to the included hardware. You can also plug a headset into the 3.5 mm audio jack, or enable the built-in 5 W speakers.

There are buttons mounted on the monitor to handle the OSD, but KitGuru felt the included remote makes changing options easier than on your average display. In their testing, KitGuru added a new benchmark, using NVIDIA's LDAT to measure end-to-end system latency. With low latency enabled on the AGON AG493UCX, the average delay between a mouse click occurring and that click appearing on screen was 12 ms. That is a significant improvement over the 33.4 ms of lag without the feature enabled, so it is well worth turning on. This is the first display they have tested with that tool, so they cannot yet offer input lag comparisons to other displays, but that will change over time.

See the full review for details on color gamut, brightness uniformity, and other results.

Verizon Gets On Site with Private 5G Networks for Enterprise and the Public Sector


In January 2021, research from IDC forecast a likely boom in private 5G networks, driven by demand from mission-critical organizations and by more spectrum becoming available for enterprise use. The latest company looking to cash in on this is Verizon Business, with the first commercially available private 5G network system in the US.

Looking to ride the 5G wave as the technology and its capabilities advance and evolve, Verizon On Site 5G is designed to provide customers with a scalable, customizable platform to take advantage of emerging technology developments such as massive IoT, artificial intelligence, machine learning, augmented reality, virtual reality, and real-time analytics.

Verizon's On Site 5G network is custom designed and managed by the communications provider, and is intended to let large enterprise and public sector customers bring 5G Ultra Wideband capabilities to indoor or outdoor facilities where high-speed, high-capacity, low-latency connectivity is crucial, regardless of whether the premises are within a public 5G Ultra Wideband coverage area.

The non-standalone private network combines 5G Ultra Wideband small cells with an LTE packet core and On Site LTE radios, so On Site LTE customers can upgrade to On Site 5G easily, Verizon says. On Site 5G draws on the best of 5G Ultra Wideband and 4G LTE capabilities as different operating environments require, and maintains interconnection to an organization's LAN, SD-WAN, and enterprise applications.

While all cellular traffic stays on site, On Site 5G allows authorized remote users access to enterprise applications. Controlled authorized user access and device management, together with the built-in privacy of a local network, are said to help keep the network secure.

Verizon says a dedicated On Site private network offers customers consistent, predictable 5G performance, security, mobility, flexibility, and capability to improve operational efficiency and accelerate digital transformation.

“On Site 5G opens the commercial floodgates for the promise of 5G Ultra Wideband, enabling large enterprises and public sector organizations to customize a 5G experience for any premises that needs it,” said Sampath Sowmyanarayan, chief revenue officer at Verizon Business. “This is a bespoke business service built on what we do better than anyone – building 5G networks that enable even the most advanced wireless, MEC, and IoT capabilities for customers on the cutting edge.”

Verizon has previously built custom 5G solutions for co-innovation and exploratory work with Corning Incorporated, Marine Corps Air Station Miramar, Mcity at the University of Michigan, WeWork, and others, and earlier this year it began a 5G deployment at Tyndall Air Force Base as part of a wider network deployment initiative with the US Air Force.

Its portfolio of network and digital transformation tools also includes 5G Business Internet, a fixed wireless connectivity offering for businesses of all sizes that uses the public 5G Ultra Wideband network. The latter is now available in parts of 24 US cities, with more to be announced on a rolling basis.


Dr. Kevin Zhang and Dr. Maria Marced


In the past week, TSMC ran its 2021 Technology Symposium, covering its latest developments in process node technology designed to improve the performance, costs, and capabilities for its customers. At this event, TSMC discussed its increasing use of Extreme Ultraviolet (EUV) lithography for manufacturing, enabling it to scale down to its 3nm process node, well beyond that of its competitors. TSMC also addressed the current issues surrounding demand for semiconductors, along with announcing that it is building new facilities for advanced packaging production. Joining CEO Dr. CC Wei as part of the keynote presentation were AMD’s CEO Dr. Lisa Su, Qualcomm’s President (and soon to be CEO) Cristiano Amon, and Ambiq’s Founder and CTO Scott Hanson.

As part of the proceedings, TSMC offered AnandTech a 30-minute interview with Dr. Kevin Zhang, SVP of Business Development, and Dr. Maria Marced, President of TSMC Europe, as an opportunity to learn more about the directions TSMC is driving in, as well as its cooperation with industry partners. TSMC did request that we keep the questioning solely on technology matters related to the announcements at its Technology Symposium, rather than discuss current global political topics.



Kevin Zhang

SVP Business Development

Maria Marced

President, TSMC EU

Ian Cutress

AnandTech

Dr. Kevin Zhang has been TSMC’s Senior Vice President of Business Development for almost a year, having been promoted from the Design Technology Team. Prior to joining TSMC, Dr. Zhang spent 11 years at Intel as an Intel Fellow, becoming Vice President of the Technology and Manufacturing Group as well as Director of Circuit Technology. Dr. Zhang has published over 80 papers in technical conferences and research journals, holds 55 patents in integrated circuit technology, and holds a PhD in Electrical Engineering. Dr. Zhang will be the conference chair for ISSCC 2022.

Dr. Maria Marced is President of TSMC Europe, responsible for driving strategy and development of the company in the region, and has been in the position since 2007. Prior to this, Dr. Marced spent four years at NXP and 19 years at Intel in similar top-position roles. Dr. Marced serves as the Chairwoman of the EMEA Leadership Council of the GSA (Global Semiconductor Alliance), is on the board of CEVA, and holds a PhD in Telecommunications Engineering.

 

TSMC on The Leading Edge

Ian Cutress: TSMC has stated that it has had in-house EUV pellicle production since 2019, and TSMC is now vastly ramping up production of pellicles. How extensive is the use in manufacturing, and how does it further TSMC’s competitive advantage versus other fabs?

Kevin Zhang: We have clearly invested in this area in-house, and I think it is a very unique technology for us. We are able to leverage it to bring up our EUV mass production. If you look at the way we ramped our 7 nm, then 6 nm, and now 5 nm, all with EUV, clearly we have made tremendous progress. So this is definitely an area where we think we have done well with our unique technology advantage.

Maria Marced: One thing, because I am here in Amsterdam, is that we are relatively close to ASML – we have had special training from them. I can tell you, having this production in-house really allows us to extend the life of the masks. In EUV the mask typically gets dirty, and therefore, with short turnaround times, this really helps us a lot to improve the productivity of EUV and of the masks.

IC: So having it on site means less travel for the masks, and they get less dirty due to less travel?

MM: That’s correct.

 

IC: We typically associate complex and specialty technologies with leading edge customers. Given that these customers are often in the low single digits, how does TSMC balance what packaging technologies to develop that customers need, compared to developing and pathfinding new technologies?

KZ: At the leading node, for example, we have been a leader – a technology leader. We want to continue to drive advancement of silicon technology, and we partner with our lead customers to optimize our technology. So this is definitely an area where we continue to drive future growth. But with that being said, I still think specialty technology also plays a very important role in our overall technology offering to our customers. Many of our customers can’t ship a single chip based on, let’s say, 5 nm without maybe a 20 nm companion chip. If you look at a phone for example, there are multiple chips, and many companion chips. It is the same thing with automotive – you have an advanced chip there, but you also need a lot of microcontrollers based on mature technology.

So I think we have been doing a good job in balancing our overall technology development effort. We have invested in mature technology substantially over the past decades. If you look at our overall technology roadmap, we are providing the most advanced specialty technology offering today in the marketplace. I think Maria maybe will add some color from a Europe perspective.

MM: The only thing I will add is that it is very important for us to understand the system complexity of our customers. Especially by having these technologies that complete the bill of materials of the whole system, we can better understand how system architectures are evolving, and therefore do a better job for our customers.

IC: So how much of that goes down to what customers are specifically demanding, versus researching new technologies that customers don’t know they need yet?

KZ: We have a separate team, from an organizational point of view – a research team looking at things beyond the next generation. We really look far out to explore different things. It also requires lots of market input and customer input to help guide some of the exploratory work. So this is a pretty dynamic process, very interactive both within us and between us and our customers.

 

IC: TSMC has been very clear in saying that it is staying with FinFET down to 3 nm, and moving to Gate-All-Around at 2 nm. By contrast, the competition is moving to GAA at an earlier stage of development. Can you describe how TSMC is weighing its desire to be on the leading edge of these advanced technologies against maintaining the same FinFET for its production lines?

KZ: The reason we chose FinFET technology at 3 nm comes down to two things.

One, we have to figure out a way to improve the technology for more power efficiency, more performance, and greater density. In the end, the customer doesn't care whether it is FinFET or Nanosheet. They want to look at it from their product point of view – what kind of power, performance, and density benefit it can bring them. That's the most important thing in the end. So when we looked at our FinFET technology and our innovation capability, we found a very powerful, innovative knob that allows us to extend FinFET technology down to 3 nm while still achieving substantial full-node scaling benefits. So that's reason number one.

Reason number two is also the schedule. We want to make sure, at the right given time, we are able to deliver the most advanced technology. So predictability, from an advanced technology development point of view, is very very important. Our customers take scheduling very seriously! So combining the two, we made a decision to stay with the FinFET at 3nm. We believe in the 2022-2023 timeframe, our 3 nm will bring the most advanced logic technology to the marketplace.

 

 

IC: How do you balance pushing process density versus design complexity, such as 1D versus 2D metal routing? What are the current possibilities on leading nodes?

KZ: We look at all the different metrics. At the end of the day, we really consider it at a product level, at a system level, and what kind of overall scaling benefit we can bring the customer. When I talk about the scaling benefits, I refer to overall power/performance and cost. This has to be done at the system level; it's not simply at the chip level.

In the past, two dimensional scaling dominated everything, but now we have to look more at a system level. For example, you notice that we spend a lot of effort and investment developing chip level integration schemes: we have 2D, 2.5D, and 3D going forward. This all comes into play to basically provide a complete system level solution for the future. I think you will see more and more applications based on sophisticated advanced chip level integration technology. Transistor development continues to be important, make no mistake, and this will continue to be very very important. Providing the customer the best energy efficient transistors is still very very important, but that’s not going to be sufficient. 

We look at overall scaling at the system level. So lots of co-optimization between technology, different aspects of the technology, and the product system level design.

 

IC: As process nodes shrink, resistance on metal layers is becoming more problematic. With regards to innovative solutions, and exotic materials versus copper interconnects, is it just a case of more research on that front? Or do we need to put more effort into adding and routing higher metal layers?

KZ: I think in the research session at our advanced technology introduction, we did cover a little bit about the back-end work. For example, we are continuing to optimize the copper grain boundary to bring a lower resistance metal line to our overall chip technology. Also, with dielectrics, we continue to find innovative materials to improve the dielectric's parasitic capacitance. So those things are being actively researched.

The 3D integration can also bring an alternative solution to this whole performance requirement in the back-end. Instead of routing from A to B in a two-dimensional space, you can route from A to B vertically in three dimensions. In some cases, by going vertical, you can reduce the overall length of the RC wire, and reduce path delay significantly. So all those things have to be looked at going forward.
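
As a rough illustration of why shorter vertical routes help (a first-order sketch using the distributed-RC Elmore model, with made-up per-millimetre values rather than any TSMC figures): wire delay grows with the square of length, so a vertical route that halves the distance from A to B cuts the RC delay to roughly a quarter.

```python
# First-order distributed RC wire delay (Elmore model): t ≈ 0.5 * r * c * L^2
# r and c below are illustrative per-mm values, NOT process data.
r = 20.0      # resistance per mm, in ohms (assumed)
c = 0.2e-12   # capacitance per mm, in farads (assumed)

def elmore_delay(length_mm: float) -> float:
    """Approximate delay of a uniform RC wire of the given length."""
    return 0.5 * r * c * length_mm ** 2

d_planar = elmore_delay(2.0)    # a 2 mm planar route from A to B
d_vertical = elmore_delay(1.0)  # a 1 mm route if we can go vertically

print(d_vertical / d_planar)    # 0.25 -> quarter the delay for half the length
```

The quadratic dependence is why reducing route length, rather than just widening wires, pays off so quickly.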

IC: So looking into new technologies, the two most promising as we go beyond Gate-All-Around are 2D transistors and carbon nanotubes. TSMC recently published a paper that got a lot of press regarding new developments in 2D transistors. Can you comment on which looks more promising?

KZ: All those advanced materials for transistors have certain advantages. That’s why we spend early R&D efforts to explore those, but those are still pretty far out. There are a lot of things that still need to be better understood, especially as it will take tremendous effort to bring those kinds of new materials, and new structures, into a large-scale manufacturing base. So there is still a lot of work ahead of us. But the good thing is we are not lacking in  new ideas. There are lots of new things we’re exploring, and they all have a certain advantage. So we just need to figure out how to integrate them all together to bring out the most compelling overall technology solution to our future customers’ applications.

IC: Regarding the research in collaboration with ASML, they’ve spoken about future EUV developments such as high NA (numerical aperture) optics. They keep talking to us about it! But as an extension of that, can you speak about what TSMC is necessarily doing regarding post-EUV technologies?

KZ: We look at all different technology options. We talked a little bit at the conference about material innovations to bring new materials integrated on the silicon to allow us to achieve better conduction and a more power efficient transistor. Those are important areas in which we have a research team and an early R&D team to develop and explore all different options. 

On lithography, obviously it continues to be a very important part in scaling the geometry. So we do have a team also looking into how to maximize EUV, to print even tighter pitches going forward. All those things are being explored for the future of technology options.

 

 

Going Beyond Asia

IC: With regards to these most advanced technologies, and leading edge capabilities, for Europe we’ve heard that the competitors of TSMC are investing in their European facilities. We’ve not necessarily heard the same from TSMC. Is there a particular reason for this? Or is there something that is to be announced? 

MM: Well, we wouldn’t rule out anything. However, today, I do not have any details to share with you!

 

IC: On your European customers – only a few of them are anywhere near the leading edge. Most of them rely on older process technologies and specialty technologies, such as the big automotive industry in Germany. We don't necessarily see a lot of desire to go leading edge from European business. Can you comment?

MM: The main segments in Europe are automotive and industrial, but another area where Europe is very important is the Internet of Things (IoT). The technologies required by those segments are more on the specialty side, and on the advanced side – not only mature, but advanced technologies.

IC: We're obviously seeing a lot of semiconductor demand for AI. Lots of customers want leading edge solutions, but there is also lots of demand for edge products on the more mature nodes. Can you talk to developments in how demand is shifting when it comes to AI, and perhaps mention the EU, given that China and North America get the spotlight?

MM: Well, even in the UK, you have good AI companies! You know that one of them is one of our early adopter customers in several initiatives (Ian: Graphcore already announced working with TSMC at 3nm). But also in Israel, we see a lot of activity around AI. So in EMEA we see a lot of interest in artificial intelligence, and even the EU has some activity leaning towards what they call the European Processor Initiative (EPI), which revolves around the use of artificial intelligence. 

So yes, we see a lot of activity. Actually, today I was very proud that during my presentation at the conference, I got an email from Mateo Valero from the university in Barcelona, which is very much involved in AI. So of course China and the US are always very advanced in high performance computing, but we also see a lot of interest in Europe, and a lot of VC money in AI.

IC: TSMC likes to promote where the revenue is coming from, and the proportion of revenue it receives from North America seems to be increasing, at the expense of the proportion from Europe. Are there any headwinds or tailwinds about Europe that we should think about? 

MM: The main reason, I think, is that Europe's main segments are automotive, industrial, and IoT. Those segments still use mature and specialty technologies, and a lot of these end-products have a lower ASP, which creates a big difference in terms of the percentage of revenue. How is this moving forward? Fast, because automotive and industrial, especially as part of Industry 4.0 and IoT, and their AI offerings, are moving quickly towards more advanced, more leading edge technology. So I really expect that this proportion is going to change significantly in the future.

 

IC: TSMC has three main geographical regions – TSMC Asia, TSMC North America, and TSMC EMEA. How independent are these organizations from each other – how much collaboration occurs? Is it right to be split this way, given that companies often work worldwide or in multiple markets?

MM: Oh this is an interesting question! I really believe in companies being centralized, having worked at Intel for many years (Maria was at Intel for 19 years), I really believe in companies being centralized and having one direction coming from the leadership of the company. So we are not independent organizations at all! We are very dependent on each other, and I can tell you I usually spend most of my time traveling to Taiwan – now video conferencing with Taiwan. But absolutely we have one direction, and we complement each other very well. I think Europe is bringing something different, which is focused more on and around specialties. This is where we play the key role. We are really one company with one direction.

 

 

 

Building and Expanding

IC: Pivoting to packaging, the CEO mentioned that there are five fabs enabling SoIC and a new fab in Chunan with more capacity. Normally we measure production fabs in wafers per month, so how should we consider the throughput of these new SoIC facilities?

KZ: I can’t give you a specific capacity number, but all that I can say is that we’re really investing in our backend capability and capacities. This is because we do see a trend that more and more customers want to leverage our advanced packaging options, including CoWoS, InFO, and going forward, SoIC with 3D integration. So that’s why we’re investing not only in the R&D side, but we’re also investing in the capacity side to prepare for future growth.

With 3D packaging capacity metrics, it depends on the configuration. Sometimes you could potentially have a very advanced chiplet integrated with more mature nodes – minus one or minus two nodes – so you probably have to compute the total volume differently. It all depends on the specific product configuration. Maybe in the future we'll have to figure out a way to better measure the volume and report the numbers. As for defining volume when it comes to 3D integration, it may be that we count the number of final integrated parts.

IC: TSMC currently has four packaging facilities, and this fifth one (called AP6) is being built in Chunan. AP6 would have over 50% of the packaging capacity of TSMC globally. Are there any positive or negative implications for having a large portion of all TSMC’s packaging in one area? 

KZ: We do a lot of balancing – there's an advantage for us in building a larger-scale manufacturing facility. I think you probably already know that we build Gigafabs today at large scale. That's a key economic benefit we can bring to our customers, enabling lower costs that are also passed on to the customer. But we do have to consider how to spread that across different locations. We are doing that, making sure we maintain a certain balance. In a similar light, we are building a factory in Arizona, pretty far away from Taiwan!

IC: Speaking about packaging and OSAT bottlenecks. When speaking with our audience, a number of them seem to think it is wafer throughput, and some of them believe it comes down to packaging throughput. I don’t necessarily want to ask you about which one is the most bottlenecked but I do want to ask about how TSMC is improving throughput of customer orders. We’ve spoken about TSMC expanding its packaging, but is there anything TSMC can do here?

KZ: I think on the technical part, your readers want to view it as either the wafer technology being the bottleneck, or the packaging technologies being the bottleneck. Actually, the way I look at it is how we find the optimum solution to bring all the pieces together at a system level to provide the best result. If you look back at semiconductor technology, it started with two-dimensional things, and Moore's Law is about transistor density, scaling, and economics. But as we move forward, I see the whole industry trending towards a higher level of integration. At technical conferences such as ISSCC, you see people not only talk about transistor-level design, but also about system-level performance, and how to bring all of the functions and pieces together. I think this trend will continue, so it's really about working with your customers on their given product application, with their unique system-level requirements, and bringing all of the pieces together in an optimum fashion. That is how I look at the future.

MM: Our key thing is that we collaborate to innovate. Collaborating with our customers is our best way to really innovate, enabling their innovations and at the same time boosting our own.

IC: DTCO (Design Technology Co-Optimization) is an integral part of making most out of leading edge technologies. Is DTCO getting more complex, or as TSMC and its customers understand the process behind achieving good DTCO, is it accelerating? Can you talk about that?

KZ: I think our customers have benefited greatly from design technology co-optimization over the last couple of generations. Going forward there will be more DTCO to do, and we find our customers are ever more eager and willing to collaborate with us in order to harvest intrinsic technology benefits. I think this trend will continue, and the effort will be even stronger going forward. As technology and design need to become more intertwined, you could call it harder? I think it will become more delicate, and we need to work more closely with our customers to really optimize things together. Now you also have advanced packaging coming in, so how a system partitions across technologies can vary. If you have a chiplet, you need to think from the get-go about how to architect your system in the right way by leveraging different pieces of silicon technology and different integration schemes.

Many thanks to Kevin, Maria, and TSMC’s Comms teams for their time.

Also thanks to Gavin Bonshor for transcription.

Podcast #631 – 3080 Ti Review, AMD FSR, RDNA 2 Mobile, 5000 APU Retail, and COMPUTEX! + more

That's not a burger. There's a GPU that isn't available. There's something that isn't a DLSS competitor. And it's all wrapped up in that podcast package.

We encourage you to check out our sponsors this week: START – see how to lower your monthly payments and take care of your credit.
And ClearME, the easiest and fastest way to "clear" your way through security at airports and events. Learn more and get your first two months of Clear free with our link!

Thanks again to all of our amazing Patreon members. This show could not happen without your continued support, so please consider helping our efforts. It most certainly helps keep us all running and the bills paid. Thank you!

Inclusivity and participation become key issues for employees in the new normal

As companies shift from fully remote working to hybrid environments, a study from online visual communications platform Canva has revealed that the majority of today's workforce considers collaboration between remote and in-office colleagues a challenge (78%), and that they need new and better technology for effective virtual collaboration (84%).

Canva's study took the views of 1,000 full-time US employees to shed light on the shifts that have taken place in the workplace and what they mean in the long term as the country emerges from the worst of the Covid-19 pandemic, and as companies try to find a balance between remote and in-person work.

Canva said the new dynamic offers "extraordinary" opportunities for employees, allowing them to choose the work environment that best suits their individual needs. It added that, for the first time, fully in-person workplaces will be the exception rather than the rule, marking the most seismic shift in office culture in decades. Businesses in the US have a once-in-a-generation opportunity to redefine work, as hybrid environments define the modern workplace, it said.

The challenge of the hybrid workplace is facilitating effective communication between remote and office-based employees without people having to struggle to manage an ever-growing list of new messages and notifications, said Canva. That requires software that brings creation and collaboration together in one place, it added.

According to the research, 21% of employees will continue to work fully remotely after the pandemic, and 38% expect a mix of remote and in-office work. Yet although the pandemic saw most workplaces successfully migrate communications online, via video calls, email, instant messaging and a host of other channels and tools, more than three-quarters of employees found work collaboration during the pandemic more challenging than before. Nearly half (45%) said they struggled when using presentation software, which across a company translates into a significant drop in productivity.

The good news is that the survey shows 54% of employees find it easier to brainstorm, participate and collaborate on ideas and tasks when working remotely, meaning that technology can actually help people communicate and contribute more easily. The key is ensuring these benefits are retained as more people return to the office and face-to-face activity increases.

Beyond communication tools, 85% of respondents agreed that to make remote work successful they need software that encourages creativity and engages an audience. Many are turning to presentations as their preferred form of communication, with the data showing an increase of up to 10% in the next six months alone.

Yet in a working environment where visual communication and interactivity are everywhere online, workplaces must go beyond dull, unengaging slides to keep people's attention. Canva's research shows that 89% admit to frequently multitasking or getting distracted while someone else is presenting. Some of the top distractions include reading emails and messages (42%), finishing up work (28%), scrolling social media (28%) and even shopping online (25%).

Software as a solution is a pressing need, but Canva chief evangelist Guy Kawasaki warned that typical legacy software and tools were not designed to meet the demands of the hybrid workforce emerging after the pandemic.

"Online collaboration is at the heart of almost all work, and employees need tools that promote creativity, make collaboration easy, and bring people together," he said. "Most of the software in use today was designed primarily for the physical workplace, with core user interfaces that have seen little modernisation in decades.

"Now that hybrid work is here to stay, workplace applications must embrace the online world, and the ability to build dynamic, shareable content."

Arizona Lottery Launches Jackpot-Themed Games – La Fleur's Lottery World

The Arizona Lottery is launching a new Summer Family of jackpot-themed games at the $1, $2, $5 and $10 price points.

The June launch of the always-popular instant jackpot-themed games will boost sales in the final month of fiscal year 2021. "We are launching our summer family of games in June 2021 to capture as many sales as possible before our fiscal year close. The campaign will run throughout the fiscal year to help kick off a strong start to fiscal year 2022," said Chris Rogers, Deputy Director, Marketing and Products, Arizona Lottery.

"The family of games will be accompanied by a Summer Jackpot Progressive promotion in which a progressive jackpot will grow each time a qualifying ticket is entered. The jackpot will start at $25,000 and can grow to a maximum of $150,000," he added.

In addition, the Arizona Lottery is selling them as part of Fast Play. "An added benefit of focusing on jackpot-themed Scratchers tickets is that the Arizona Lottery also has a lineup of progressive jackpot Fast Play tickets," Rogers added. "We have made all of the Jackpot Scratchers and Progressive Jackpot Fast Play games eligible for the promotion. By cross-promoting the Scratchers and Fast Play games in our campaign, we can bring maximum awareness to a product line that we otherwise would not be able to support with marketing dollars."

The lottery will promote the new summer family of jackpot-themed instant tickets alongside a promotion in its Players Club. "The family of games will be accompanied by a Summer Jackpot Progressive promotion in which a progressive jackpot will grow each time a qualifying ticket is entered. The jackpot will start at $25,000 and can grow to a maximum of $150,000," said Shelby Alessi, Arizona Lottery Marketing Manager.

Traditional advertising will also support the summer vacation theme. "We continue to take a more playful approach to our advertising campaigns on radio and TV to promote summer memories and togetherness," said Rogers.

As for social channels, Alessi added that "we will continue to run livestreams and paid social campaigns to keep our audience engaged."

192 MB at 2 TB/s

The AMD team surprised us here. What looked like a very par-for-the-course Computex keynote turned into a remarkable demonstration of what AMD is testing in the lab with TSMC's new 3DFabric technology. We have covered 3DFabric before, but AMD is putting it to good use by stacking its processors with additional cache, enabling super-fast bandwidth and better gaming performance. At least, that is the claim, and AMD showed off its new demo processor on stage at Computex. Here is a deeper dive into what it actually is.

3D Chiplets: The Next Step

AMD announced it was looking at 3D stacking technology with 'X3D' back in March 2020 at its Financial Analyst Day, with a rather odd diagram showing a chiplet processor with what looked like stacks of HBM or some sort of memory on the outside. At the time, AMD said it was a mix of 2.5D and 3D packaging technologies enabling 10x or higher bandwidth density. The 'X' in 'X3D' was meant to represent hybrid, and the technology was slated for 'the future'. Since then, TSMC has announced its 3DFabric family of technologies, an umbrella name for its combined 2.5D and 3D integration offerings.

Today AMD presented the first stage of its 3D chiplet journey. The first application is cache stacked on top of a standard processor chiplet. On stage, Lisa Su showed off one of AMD's dual-chiplet Ryzen 5000 processors with Zen 3 cores. On one of the compute chiplets, a 64 MB SRAM built on TSMC 7nm had been integrated on top, effectively tripling the amount of cache the cores have access to.

That means the original Ryzen 5000 chiplet, in which eight cores have access to 32 MB of L3 cache, now becomes an eight-core complex with access to 96 MB of L3 cache. The two dies are bonded with Through Silicon Vias (TSVs), passing power and data between them. AMD claims that total L3 cache bandwidth increases to beyond 2 TB/sec, which would technically be faster than on-die L1 cache (but at higher latency).

As shown in the chip diagrams, the TSVs are direct copper-to-copper bonds. The cache die is not the same size as the core complex, so additional structural silicon is needed to ensure equal pressure across both the bottom compute die and the top cache die. Both dies are thinned, with the goal of fitting the new chiplet into the same substrate and heatspreader technology currently used in Ryzen 5000 processors.

The prototype processor shown on stage had one of its chiplets using this new cache technology. The other chiplet was left as standard to show the difference, and the one chiplet with the cache die 'exposed' made it clearly comparable against a regular non-integrated chiplet. CEO Dr. Lisa Su said that the 64 MB SRAM in this case is a 6mm x 6mm design (36 mm2), which puts it at under half the die area of a full Zen 3 chiplet.

In a full product, Lisa explained, all the chiplets would have the stacked cache enabled, for 96 MB of cache per chiplet, or 192 MB total for a processor like this with 12 or 16 cores.

As part of the technology disclosure, it was explained that this packaging enables >200x interconnect density compared to regular 2D packaging (which we already knew from HBM stacking), a >15x density improvement over microbump technology (a direct shot across the bow of Intel's Foveros), and >3x better interconnect efficiency than microbumps. The TSV interface is a direct die-to-die copper interconnect, meaning AMD is using TSMC's Chip-on-Wafer technology. Dr. Su claimed on stage that these features make it the most advanced and flexible 'active-on-active' chip stacking technology in the industry.

As for the performance demonstration, AMD compared before and after using Gears 5. On one side was a standard 12-core Ryzen 9 5900X processor, while the other was a prototype using the new 3D V-Cache built on a Ryzen 9 5900X. Both processors were fixed at 4 GHz, and paired with an unnamed graphics card.

In this scenario, the point of comparison is that one processor has 64 MB of L3 cache, while the other has 192 MB of L3 cache. One of the selling points of the Ryzen 5000 processors is the expanded L3 cache available to each core to help gaming performance, and moving that up to 96 MB per chiplet extends the advantage even further, with AMD showing a +12% FPS increase (184 FPS vs 206 FPS) from the larger cache at 1080p. Across a range of games, AMD claims an average of +15% in gaming performance:

  • DOTA2 (Vulkan): +18%
  • Gears 5 (DX12): +12%
  • Monster Hunter World (DX11): +25%
  • League of Legends (DX11): +4%
  • Fortnite (DX12): +17%

This is not an exhaustive list by any means, but it makes for interesting reading. AMD's claim here is that a +15% uplift is similar to a full architectural generational jump, effectively enabling a rare kind of improvement through a difference in design philosophy. Here at AnandTech we would note that as it becomes harder to move to new process nodes, design-philosophy improvements may become the main driver of future performance.
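As a quick sanity check on AMD's quoted numbers (a sketch using only the figures from the list and the Gears 5 demo above), the average of the five per-game uplifts lands on the claimed ~15%, and the 184→206 FPS demo matches the quoted +12%:

```python
# Per-game uplifts quoted by AMD, in percent
uplifts = {
    "DOTA2": 18,
    "Gears 5": 12,
    "Monster Hunter World": 25,
    "League of Legends": 4,
    "Fortnite": 17,
}

# Average across the five titles -> 15.2%, in line with AMD's "~15%" claim
avg = sum(uplifts.values()) / len(uplifts)
print(round(avg, 1))  # 15.2

# The Gears 5 demo numbers: 184 FPS -> 206 FPS
gain = (206 - 184) / 184 * 100
print(round(gain, 1))  # 12.0
```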

AMD mengatakan bahwa mereka telah membuat langkah besar dengan teknologinya, dan akan memasukkannya ke dalam produksi dengan prosesor kelas atas pada akhir tahun ini. Tidak disebutkan produk apa yang akan datang, apakah itu konsumen atau perusahaan. Menyusul ini, AMD mengatakan bahwa Zen 4 akan diluncurkan pada tahun 2022.

Analisis AnandTech

Itu tidak terduga. Kami tahu bahwa AMD akan berinvestasi dalam teknologi Fabric 3D TSMC, tapi saya rasa kami tidak berharap akan secepat ini atau dengan demo pada prosesor desktop terlebih dahulu.

Starting with the technology: this is clearly TSMC's SoIC Chip-on-Wafer in action, albeit with only two layers. TSMC has demonstrated twelve layers, however those were non-active layers. The problem with stacking silicon is always going to be the activity, and subsequently the thermals. We have seen with other TSV-stacked hardware, such as HBM, that SRAM/memory/cache is the perfect vehicle for this, as it does not add all that much to the thermal requirements of a processor. The downside is that the cache you stack on top is little more than just cache.

This is where AMD's and Intel's stacking approaches differ. By using TSVs rather than microbumps, AMD gets the greater bandwidth and power efficiency of TSVs, and can also stack multiple chiplets if needed. TSVs can carry both power and data, but both dies still have to be designed for that cross-signaling. Intel's Foveros technology, while also 3D stacking, relies on microbumps between the two chiplets. These are larger and more power hungry, but they allow Intel to put logic on both the bottom die and the top die. The other element is thermals: normally you want the logic on the top die to manage thermals better, as it sits closer to the heatspreader/heatsink, but moving the logic further from the substrate means power has to be transported up to that top die. Intel expects to combine microbumps and TSVs in upcoming technology, and TSMC has a similar roadmap for its customers in the future.

Moving to the chiplet itself, the claim is that the 64 MB L3 cache chiplet measures 6mm x 6mm, or 36 mm², and is built on TSMC 7nm. The fact that it is built on TSMC 7nm is going to be a critical point here: you might otherwise think that a cache chiplet would be better suited to a cheaper process node. The trade-off against cost is power and die area (yield at that small a die size is barely worth worrying about). If AMD is building these cache chiplets on TSMC 7nm, then a Zen 3 design with the extra cache requires the usual 80.7 mm² for the Zen 3 chiplet, plus another 36 mm² for the cache, effectively requiring 45% more silicon per processor. At a time when we are in the middle of a silicon shortage, this may have an effect on exactly how many of these processors are made available for wider use. This might be why AMD is talking about 'high-end' products first.
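The ~45% figure follows directly from the die sizes quoted above, and is worth spelling out (the two die areas are from AMD's disclosures; the per-mm² density is simply derived from them):

```python
# Die-area arithmetic behind the ~45% extra-silicon figure.
# Both die sizes are as quoted by AMD; the rest is derived.
zen3_ccd_mm2 = 80.7        # Zen 3 compute chiplet, TSMC 7nm
vcache_mm2   = 6.0 * 6.0   # 64 MB SRAM cache chiplet, 6mm x 6mm = 36 mm^2

extra = vcache_mm2 / zen3_ccd_mm2
print(f"Extra 7nm silicon per chiplet: {extra:.0%}")  # → 45%

# SRAM density of the cache die, for reference
print(f"Cache density: {64 / vcache_mm2:.2f} MB per mm^2")  # → 1.78 MB/mm^2
```

So every V-Cache-equipped chiplet consumes roughly one and a half chiplets' worth of 7nm wafer area, which is why the supply argument matters.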

Now, adding 64 MB of cache to a chip that already has 32 MB of L3 cache is not as easy as it sounds. If AMD integrates it directly as an adjacency to the existing L3 cache, then we have a bifurcated L3: accessing that 64 MB perhaps requires more power, but it provides greater bandwidth. Whether the regular 32 MB is sufficient, compared to the extra 64 MB provided by the stacked die, will depend on the workload. We could instead see the extra 64 MB treated as an L4-equivalent cache, however the issue there is that for that extra 64 MB to reach main memory, traffic has to pass through the main chiplet underneath it, which is additional power draw worth noting. I am very interested to see what the memory profile from a core's perspective looks like with this additional chiplet, and how AMD integrates it into the fabric. AMD has stated that this is an SRAM-based design, so unfortunately it is not something fancy like persistent memory, which would be a completely different design ethos. Sticking with SRAM means that at the very least it can seamlessly provide a performance uplift.

On performance, we have seen increased L3 cache depth improve gaming performance before, for both discrete and integrated graphics. However, increased L3 cache depth does not help much else. This was best exemplified in our review of Intel's Broadwell processors, with their 128 MB of L4 cache (~77 mm² on Intel 22nm), where the extra cache only improved gaming and compression/decompression tests. It will be interesting to see how AMD markets the technology beyond gaming.

Finally, the intercept into the mainstream: AMD says it is set to start integrating the technology into its high-end portfolio, with production by the end of the year. AMD has also said that Zen 4 on 5nm launches in 2022. Based on previous cadences, we had predicted that AMD's next processor family would arrive around February 2022. Whether that is Zen 4 is still unclear at this point, but note that Zen 4 is on 5nm while AMD is showcasing 3D V-Cache on 7nm. Whether AMD plans to monetize this feature on 7nm, or whether it might pair 5nm Zen 4 chiplets with 7nm 64 MB cache chiplets, is still unclear. It would not be too difficult to combine the two, however I suspect AMD may want to drive its caching technology into more premium products than desktop Ryzen first. We might see a one-off special edition while the technology ramps up.

In closing, I have a number of questions I would like to put to AMD. I am hoping to get some answers, and if I do, I will loop back around with the details.