lwneal 44 days ago [-]
> Today’s state-of-the-art is five-nanometre chips (though “5nm” no longer refers to the actual size of transistors as earlier generations did).

It's refreshing to see this mentioned. I'm no semiconductor expert, but it seems weird to me that although node size is measured in a physical unit, nanometers, it does not correspond to any real measurement that exists [1]. Each transistor in a 5nm chip is actually between 28 and 36 nanometers in width. It's called 5nm because of a theoretical calculation based on transistor density [2].

If Tesla advertised a new "200kWh" battery, I would be very disappointed if it turned out that the battery only held 95kWh, but the marketing department had decided that improvements in the charging network have made it "like 200kWh" compared to earlier models.

[1] https://en.wikipedia.org/wiki/5_nm_process

[2] https://en.wikichip.org/wiki/technology_node#Meaning_lost

nyunai 44 days ago [-]
Interestingly, there is one dimension that still tracks the node number fairly closely.

In fact, the node number has never referred to a transistor dimension, but to the length of the gate electrode. A transistor would typically be about 4 times larger than that.

The reason to refer to the length of the gate electrode is that it was always the smallest feature printed on the chip. And that is what defined the required resolution of the lithography process.

So, a lithography process with a resolution of 32nm would be able to print chips with 32nm gate lengths.

Much has changed: transistors are no longer planar, they are 3-dimensional fin structures (FinFETs). Gate-length scaling has slowed down, and the gate is no longer the smallest feature on the chip.

The smallest feature in a modern FinFET process is actually the width of those 'fins' that form the transistor channels.

And as if by magic, they correspond pretty well to the node names, i.e. a 5nm process will have a fin width of about 5nm.

You will likely not find that mentioned anywhere, it's just a fun fact that I noticed as a VLSI technologist.

kayson 44 days ago [-]
It's still a little disingenuous, because most performance parameters are directly related to transistor dimensions, FinFET or not. There used to be a concept of "scaling" where you could roughly estimate that going from a 40nm process to a 28nm process, for example, would scale power by about (28/40)^2, i.e. cut it roughly in half. FinFET disrupted that to some extent, but it partly held true as physical dimensions actually continued to decrease with node names. By now, though, it is generally understood that state-of-the-art process node numbers don't mean much in that regard.
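The old back-of-the-envelope rule is easy to sketch (an illustration of the intuition only, not a real power model):

```python
def dennard_power_ratio(new_nm, old_nm):
    # Classic constant-field scaling intuition: area and capacitance
    # shrink with the square of the linear feature size, and power
    # roughly tracks them.
    return (new_nm / old_nm) ** 2

# 40nm -> 28nm: (28/40)^2 = 0.49, i.e. roughly half the power.
```

FinFET-era node names broke this relationship, which is exactly the complaint above.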

I also just checked a handful of processes I have access to, and at least in these cases, the fin width doesn't scale with the technology name (e.g. the 5nm and 7nm processes both have an 8nm fin width drawn in layout, and the model indicates a 27nm "width" per fin). This isn't much of a surprise to me, because foundries are just decreasing the number with newer generations of the tech, even if the lithography (the method by which features are patterned onto the silicon) hasn't significantly changed. For example, 3 generations of a single process were called 7***, then 5***, then 4***.

dnautics 44 days ago [-]
> It's still a little disingenuous because most performance parameters

That's your bias. To a process engineer the parameter that matters is what wavelength you are blasting the mask with and what size a feature it can create.

lumost 44 days ago [-]
This is probably similar to the debate between management and engineers at Intel. As an outside observer who has never been near CPU design, let alone process engineering: TSMC's nomenclature smacks of moving the goalposts to claim a win, despite the reality that, from a process engineering standpoint, decoupling the "transistor size" from the production node is an improvement and allowed TSMC to ship a technically superior process.
Guthur 44 days ago [-]
And does it actually matter? It ultimately refers to some improved, denser semiconductor process, which is what the customer is buying.

These processes are not exactly standardised either: it would be a major undertaking to take an architecture from TSMC 7nm to 5nm, let alone moving from TSMC to Samsung, GlobalFoundries, or, if at all possible, Intel.

hinkley 44 days ago [-]
So essentially the 'node' size describes the lithography process, not the chips you design with that process. You can't make a feature smaller than the lithography resolution, nor x.5 times as big as it. There will then be a bunch of parts of the chip that are just a little bigger, or a little farther apart, than they need to be.
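The "x.5 times the resolution" point can be sketched like this (purely illustrative; real design rules are far more involved than snapping to a single grid):

```python
import math

def printable_size(desired_nm, resolution_nm):
    # Feature dimensions land on whole multiples of the process
    # resolution, so a desired size gets rounded up to the next step.
    return math.ceil(desired_nm / resolution_nm) * resolution_nm

# A feature you'd like at 80nm on a 32nm process ends up at 96nm:
# a little bigger than it needs to be.
```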

Not unlike how there are pipeline phases that are just a little longer than they theoretically could be (because all of their neighbors have to line up with each other). If you can fix the bottleneck then everything else can shine, causing a magical, outsized improvement.

not2b 44 days ago [-]
The node size never referred to the actual size of the transistors. Back in the days when CMOS scaling was simple, it referred to the half-pitch, half of the center-to-center distance of the Metal 1 lines. The gate length was roughly the same, usually. But scaling had to be changed starting in roughly 2003 because atoms don't scale; the joke was that transistors were no longer switches, but just dimmers, because leakage was an increasing problem. Designs could no longer be simply scaled, producing a faster clock cycle for free; they had to be revised, and a FinFET looks very different from a traditional FET.

To make further progress designs had to be changed, and things have gotten fuzzier. Processes from different foundries have different densities. Intel's process for a given node is denser than that of the big foundries like TSMC.

See https://en.wikichip.org/wiki/technology_node for more on this.

jjoonathan 44 days ago [-]
On the plus side, MT/mm^2 seems to be gaining steam as a metric. It removes one dimension of cheating and has the whole "bigger is better" thing going for it.
acidbaseextract 44 days ago [-]
A nice approach I'm hoping will gain traction when discussing density across different types of logic is the "LMC" metric, as put forward by some TSMC and Stanford folks: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9063714

Improved semiconductor device density directly translates into benefits for more advanced computing systems— the primary driver for progress in semiconductor technology. Thus, we propose the use of the following three-part number as a metric to gauge advancement of future semiconductor technologies: [DL, DM , DC ], where DL is the density of logic transistors (in #/mm2), DM is the bit density of main memory (currently the off-chip DRAM density, in #/mm2), and DC is the density of connections between the main memory and logic (in #/mm2). As an example, today’s leading edge technologies that are published in the literature [15]–[17] can be characterized by [38M, 383M, 12K]. As another example, 3-D stacking of multiple logic and memory dies can increase DL, DM, and DC .
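The three-part metric from the quoted proposal could be carried around as a simple record, here filled in with the paper's own "leading edge" example values:

```python
from collections import namedtuple

# [DL, DM, DC] from the quoted proposal: logic transistor density,
# main-memory bit density, and logic-memory connection density,
# each in count per mm^2.
LMC = namedtuple("LMC", ["DL", "DM", "DC"])

leading_edge = LMC(DL=38e6, DM=383e6, DC=12e3)
```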

jjoonathan 44 days ago [-]
I love the idea of having that information readily available, but I don't think it will ever be a banner spec. Three numbers is two too many and who wants "density" when you can have "MEGATRANSISTORS"? I know, I know, but marketing is what it is and I think it's better to lean into it.
acidbaseextract 44 days ago [-]
Of course, and I agree that marketing traction matters. I should have said this is an idle hope that some of these fancier metrics get published/leaked about TSMC/Samsung/Intel processes rather than that they become the metric. I do think MT/mm^2 or whatever is a strong candidate to become the thing. "megatransistors" is an exciting word.
phkahler 44 days ago [-]
>> "megatransistors" is an exciting word.

Not as cool as megaFonzies.

wombatpm 44 days ago [-]
So is that going to be M as 1000^2 or M as 1024^2?

(still bitter about the whole GB vs GiB thing caused by marketing people)

MrPatan 44 days ago [-]
To be fair to marketing people, they were correct, weren't they?

Deciding that in your field you're going to redefine standard units and their abbreviations is a bit lazy.

elihu 44 days ago [-]
That's "mega-transistors" per mm^2?

Perhaps it would be better to go with "6502s per square millimeter" or something like that that would probably be a closer approximation of real-world density, since it accounts for wire routing and so forth.

SRAM density is sometimes used I think.

function_seven 44 days ago [-]
Not folksy enough. Just like we use Libraries of Congress to measure information volume, we should rate our chips in terms of AGC/mm^2

(Apollo Guidance Computer)

smnrchrds 44 days ago [-]
Well, I would be just glad that at least it's not AGC/sq in
amelius 44 days ago [-]
Today's chips have many more layers of interconnect than the 6502. Also the aspect ratios of transistors are different.
bigmattystyles 44 days ago [-]
Isn’t that still a very bad descriptor, though, like relying only on clock speed to make a decision? Though I guess many people do; back in the day, I think AMD CPUs were on par with Intel CPUs running 200MHz faster. But all people saw was AMD 1.7GHz and Intel 1.9GHz. The result: the AMD chips at 1.7GHz were called Athlon 1900. Which is the same thing as the equivalent feature length for FinFET. The people in the know should actively be discouraging reliance on single-dimension (frankly misleading) terms.
5d749d7da7d5 44 days ago [-]
Should that be future-proofed and be per mm^3? I was under the impression that designs are becoming more three dimensional.
Diggsey 44 days ago [-]
According to the Bekenstein bound, the maximum entropy in a given region of space is proportional to the area of a surface containing that space, not its volume. So to really future-proof it, we should probably go with per mm^2...
jjoonathan 44 days ago [-]
Maybe for memory, but I think it's too early for logic where heat transfer forces you to keep volume per surface area under control. Legitimate 3D logic will require a radically parallel compute model, probably something neural, and until we see what form that takes I think it's better to keep the metric 2D.
analog31 44 days ago [-]
Similar story with audio amplifiers and "Watts." Also, I think that small gas engines like lawnmowers went through some kind of scandal related to their horsepower ratings.
amluto 44 days ago [-]
How about vacuum cleaners with HP? I guarantee that there isn’t a 3HP motor that works on a 120VAC (RMS) 15A (RMS) circuit.
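The arithmetic behind that guarantee (assuming unity power factor and zero losses, which only makes the comparison more generous to the vacuum):

```python
HP_TO_WATTS = 745.7  # one mechanical horsepower in watts

# A 120V RMS, 15A RMS circuit can deliver at most:
circuit_limit_w = 120 * 15          # 1800 W
claimed_motor_w = 3 * HP_TO_WATTS   # ~2237 W

# The claimed 3 HP exceeds what the outlet can physically supply.
```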
m-ee 44 days ago [-]
A friend worked on a blender. The wattage number was derived from the inrush current when first turned on.
amluto 44 days ago [-]
I’m glad I don’t have that blender. :) (I’m the proud owner of a blender with a bona fide brushless motor. I don’t know specifically what type, but I suspect the inrush current is quite low. It also outputs considerably more power than most brushed motors can probably manage for any length of time.)

edit: The product blurb says it’s an induction motor. The nameplate says 1000W. It would be fairly straightforward to estimate the efficiency by blending a known amount of water and measuring the temperature change over some period of time.
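That estimate works out to a simple calorimetry calculation (a sketch only; it assumes essentially all mechanical work ends up as heat in the water, and the example numbers are hypothetical):

```python
SPECIFIC_HEAT_WATER = 4186  # J/(kg*K)

def blender_efficiency(water_kg, delta_temp_c, input_watts, seconds):
    # Energy that showed up in the water vs. energy drawn from the wall.
    heat_j = water_kg * SPECIFIC_HEAT_WATER * delta_temp_c
    drawn_j = input_watts * seconds
    return heat_j / drawn_j

# e.g. 1 kg of water warming 6 degrees C over 60 s on the 1000 W
# nameplate would suggest roughly 42% of nameplate power reached the water.
```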

namibj 44 days ago [-]
To be fair, that might be the current draw in stall.
tesseract 44 days ago [-]
They are basically the same quantity - the motor doesn't care if the rotor is stopped because it's physically locked in place, or just because it was previously not running.
laurent92 44 days ago [-]
I don’t think so. Attempt the opposite experiment: if you turn the motor, electricity comes out. So I don’t know what the draw is when stalling, but I'd guess the draw must vary a lot even within a north/south cycle.

In fact, the draw varies enough during the cycle that it's a well known method in industrial settings to put a capacitor on 60Hz motors when the supply is 50Hz. So there must be a visible cycle.

namibj 42 days ago [-]
The magnetic behavior of the windings can depend on this, though. A good example would be a toroidal transformer, which tends to have a high inrush current, even if the output is open circuit.
taneq 44 days ago [-]
Yeah I remember when they switched from RMS to PMPO. Overnight a big speaker system went from 50-100W to 2kW or something ridiculous.
klodolph 44 days ago [-]
Yes, some amplifiers would be advertised with nonsense like “watts RMS”. The concept of “watts RMS” makes no goddamn sense.
analog31 44 days ago [-]
Actually in the US, the term "Watts RMS" specifies that the measurement is done according to the FTC Amplifier Rule. Now that rule has its own pros and cons of course, but at least it's pretty definite. With all of these things, there's a corresponding European rule as well.

Whether the measurement method is appropriate to your use of an amplifier is of course anybody's guess.

namibj 44 days ago [-]
Considering that the load impedance of audio amplifiers isn't always the same, it'd be a bad idea to measure output voltage in terms of gain compression points/clipping/max continuous.

I agree the term "RMS" is kinda misappropriated, but it's short and tends to be understood as something reasonably close to what it's actually about.

s5300 44 days ago [-]
No, that's just shitty vendors being shitty.

Accurate RMS can easily be calculated and advertised. Check out Steve Meade and his tools/videos if you're unaware.

Most prominent brands just choose to market absolute bullshit numbers - people who know what they're actually after know how to find the brands who publish real numbers.

11thEarlOfMar 44 days ago [-]
I thought the 5nm referred to the smallest dimension in a FinFet device. Specifically, the fin: "In theory, the finFET hits its limit when the fin width reaches 5nm, which is close to where it is today." [0]


xxpor 44 days ago [-]
I feel that stuff like this has a long history in engineering. It's like how TVs are in the 55 in "class", or a 2x4 is actually something like 1.75x3.5 now.
wfleming 44 days ago [-]
2x4s are 2x4 before they're seasoned, though. The lumber is dried after dimensioning and shrinks as it loses moisture. But we still label it by the original dimensions because lumber is priced by the board foot (which is itself a weird archaic measure) of the lumber when it was cut.

For comparison, sheet goods aren't priced this way. A 3/4" sheet of plywood really is 3/4", because the plywood was manufactured that way and as a manufactured product there is no seasoning.

You're not being ripped off because a 2x4 is actually smaller. A sawmill had a tree, and they cut off a hunk of that tree, and they charge by how much wood they cut off. The dimensions of that piece of wood changed after they cut it (which in and of itself is a service they provide - it would be a hassle if you had to buy green lumber and dry it yourself).

highfreq 44 days ago [-]
I work in sawmill technology. Way back in history 2x4 were cut to 2" x 4". In a modern sawmill 2x4s are never cut to 2" x 4". They are cut to a thickness and width such that given saw deviation, and variable drying shrinkage nearly all boards will cleanly plane down to 1.5" x 3.5". The exact target dimensions will depend on how well the mill can control their saw deviations and the statistical range of shrinkage they expect for the wood they are cutting. Reducing dimension targets by controlling saw deviation and understanding drying shrinkage is a big part of sawmill efficiency and profitability.
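A toy version of that sizing logic (all numbers hypothetical and illustrative; real mills tune these from measured saw deviation and shrinkage statistics, as described above):

```python
def green_target(finished_in, shrink_frac, saw_sigma_in, planing_stock_in, k=3.0):
    # Cut oversize so that after drying shrinkage and k-sigma saw
    # deviation, nearly every board still planes down to the finished size.
    needed_after_drying = finished_in + planing_stock_in
    green = needed_after_drying / (1 - shrink_frac)
    return green + k * saw_sigma_in

# e.g. a 1.5" finished face with 4% shrinkage, 0.02" saw sigma and
# 0.05" of planing stock targets roughly 1.67" green.
```

Tightening saw deviation or better predicting shrinkage lets the mill lower the green target, which is where the efficiency gains come from.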
wfleming 44 days ago [-]
Fair enough, thanks for those details. Out of curiosity, do you know roughly what the target dimensions usually are nowadays? I'm curious how much the efficiency gains are.

It's interesting that lumber kind of looks like shrinkflation (what the comment I was originally responding to suggested), but it's more that technical improvements have allowed producing the same finished product with less input material. But we still label the stuff by the amount of input material it used to take for historical reasons, which at a glance looks like shrinkflation.

bombela 44 days ago [-]

And what's worse is how hard it is to know the true size of everything you find in stores in the USA. The same applies to pipes of all types. But even things like drawer slides are often not the advertised length. Saw blade thickness is converted from 2.5mm to whatever the closest inch the salesperson felt like that day.

From having spent quite a lot of time in big box stores in France and the USA, I can tell you that in the USA 3/4" (19mm) can be anything from 16mm to 24mm.

In short, almost everything is built in metric, and the USA likes to pretend it's not and gets confused by rounding up randomly.

peteradio 44 days ago [-]
2x4 in old houses will measure 2x4 and is "rough cut". Modern milling will turn rough cut true 2x4 into the clean cut "2x4" you use today by running them through additional planing process. I don't think any shrinkage is significant though.

3/4 plywood will measure less than 3/4.

HideousKojima 44 days ago [-]
3/4 inch plywood is usually actually about .7 inches
wfleming 44 days ago [-]
Does this vary more depending on the plywood grade, maybe? I have some shelving I made from AB birch, and it looks pretty close to .75. Maybe 1/64 scant. (Although I don't have calipers handy and just used a tape measure, so I'm not being super precise.)
spenczar5 44 days ago [-]
Lots of plywood, especially birch, is metric-sized. You might have 18mm plywood on your hands, which would be about 1/32 under 3/4 inches.
neogodless 44 days ago [-]
The plywood I often buy at a big box home store is labeled as 23/32" (0.71875" or 1/32" less than 3/4") and it is actually that thickness. I don't recall seeing any 3/4" labeled plywood though.
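The numbers above line up, as a quick check shows (plain unit conversion, nothing more):

```python
MM_PER_INCH = 25.4

nominal = 3 / 4                  # 0.75"
label_23_32 = 23 / 32            # 0.71875", exactly 1/32" under nominal
metric_18mm = 18 / MM_PER_INCH   # ~0.7087", a hair under 23/32"
```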
joshualross 44 days ago [-]
I don't think we should limit to just engineering; you see this stuff in media, politics, finances, statistics, etc. Anything that is not well understood by the public at large is ripe for misuse.
HPsquared 44 days ago [-]
It's not new: CRT TVs and monitors were marketed by the size of the tube, rather than the size of the visible part (the screen itself).
gumby 44 days ago [-]
"Building" (or worse "designing") a computer is assembling lego bricks. Serious language inflation.
taneq 44 days ago [-]
Yeah, these casuals think they're building a computer when they're really just sticking ICs together. If I was building my own computer, I'd do a real DIY build and start by creating the universe.
tedunangst 44 days ago [-]
And a 16oz iced coffee from Starbucks doesn't contain 16oz of liquid!
m463 44 days ago [-]
Funny, but Tesla actually stopped advertising battery sizes when the model 3 came out. It's actually a bit of work to find out (answer: 54 or 62 or 75 kWh)
taneq 44 days ago [-]
True, and this is precisely why they did it - this way they reap the benefit of any efficiency improvements. If their tech improves, it costs them less to build a "300 mile" car, rather than the buyer simply getting slightly more miles out of a 90kWh car.
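A sketch of that incentive (the consumption figures are illustrative, not Tesla's actual numbers):

```python
def battery_for_range_kwh(range_miles, wh_per_mile):
    # Pack size a maker must install to deliver an advertised range.
    return range_miles * wh_per_mile / 1000

# Selling "300 miles": if consumption improves from 300 Wh/mi to
# 250 Wh/mi, the required pack drops from 90 kWh to 75 kWh, and the
# maker, not the buyer, captures the difference.
```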
emteycz 44 days ago [-]
Remember when Mercedes E230 meant 2.3 litre engine? I feel this is similar. Indeed a long history as another commenter points out.
noncoml 44 days ago [-]
Yup. Similar with BMW. 330 meant 3 liter engine. Now it’s all the “equivalent“ game.
mrlonglong 44 days ago [-]
Mine's a 325i that actually has a 3L engine.
GoOnThenDoTell 44 days ago [-]
230 deci-litre?
africanboy 44 days ago [-]
230 centiliter
blackrock 44 days ago [-]
> Each transistor in a 5nm chip is actually between 28 and 36 nanometers in width. It's called 5nm because of a theoretical calculation based on transistor density

So, TSMC is not really that far ahead after all? 28 nm is like a decade old now?

This sounds like women’s vanity sizing. Size 6 is now Size 0.

fctorial 44 days ago [-]
GHz would be a better analog of kWh. Anyone that isn't an electronics engineer is not qualified to talk about '?nm' technologies
cbozeman 44 days ago [-]
> Anyone that isn't an electronics engineer is not qualified to talk about '?nm' technologies

Jesus, this is the height of arrogance...

Wendell Wilson isn't qualified to talk about this?

Linus Sebastian isn't qualified to talk about this?

Dr. Ian Cutress isn't qualified to talk about this?

I would listen to all three of those before I listen to anything you say.

amelius 44 days ago [-]
I can see how an average consumer would fall for "5nm". But this is targeted at engineers ...
Waterluvian 44 days ago [-]
This is similar to all the disk space advertising and perpetual need to explain to friends why their disk isn't actually 500MB -> 2TB over the years.
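That perennial conversation boils down to two unit systems (decimal units on the box, binary units in the OS):

```python
def advertised_gb_to_os_gib(gb):
    # Drive makers count 1 GB = 10^9 bytes; most OSes report GiB = 2^30 bytes.
    return gb * 10**9 / 2**30

# A "500 GB" drive shows up as roughly 465.7 GiB. Nothing is missing;
# the two sides are just counting with different units.
```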
k__ 44 days ago [-]
If I look at HDD sizes and CPU "performance numbers", it doesn't surprise me that 5nm doesn't mean the actual transistor size.
d58wfq7ps 44 days ago [-]
Why not? It's what computer power supplies have done for decades.

1500W PSU!!! But you'll never know how we came up with that figure!

detaro 44 days ago [-]
Outside of absolute no-name crap I haven't seen many fake numbers on PSUs?
mywittyname 44 days ago [-]
> But the logical endpoint of the relentless rise in manufacturing costs is that, at some point, one company, in all likelihood TSMC, could be the last advanced fab standing.

Most established, capital-intensive markets are winner-take-most, where a few big players control the bulk of the market. And they usually remain at the top of the market until some fundamental paradigm shift occurs that causes the market to shrink.

TSMC will be top dog for a while, until something happens that causes the market for custom made silicon to collapse. Just like Intel was top dog until something came along to replace the market for x86. Or like Microsoft was King Dingaling until people moved to mobile computing.

heimatau 44 days ago [-]
> TSMC will be top dog for a while, until something happens that causes the market for custom made silicon to collapse.

Contrary to many popular technological opinions, Moore's Law will be dead for a while. We're nearly at our physical limits. When this ceiling is hit, we'll start seeing numerous more competitors enter the fray. This is because the research burden will be too large for one company to innovate itself out of (TSMC will be the first to hit the ceiling, and the competition will reach the same ceiling in a short time. Together, they may advance our scientific understanding, but I think governments would need to get aggressively involved). This is a fundamental scientific limitation. There is no emerging research that will overcome this problem (zero theoretical research that can be used for engineering the manufacturing needed for the next advancements).

3D transistors are next but that's not growth at an exponential curve. We're looking at a linear growth future for a while.

P.S. I'm 33 and I expect within the next 6-15 years that we'll hit this limit and be at this limit for most of my working career unless we start seeing some seriously massive investment to unearth more advanced physics that we can implement at the pico-level. If you disagree, I'd love to hear what you think that will allow Moore to continue because I've read numerous studies and essentially all have zero tangible implementations.

josephg 44 days ago [-]
As a software engineer this feels like heresy, but I can't wait for the day computers stop getting faster. Shoddy software design has been bailed out time and time again by Moore's law. I'm excited by the prospect of a software industry where "tell the client to buy a faster computer" is no longer an acceptable excuse.

For example, there's room for a clean, native cross-platform application toolkit that does everything that electron does. But we don't have that. Instead application developers use electron (trading my CPU & RAM for their time). And people get angry at Apple for "only" shipping 16 gigabytes of RAM in their computers rather than angry at lazy app developers. And we don't invest in the tooling we'd need to actually fix the problem. (In this case, Electron but small and native).

Another example: as an industry we know how to make fast compilers (e.g. Go, Jai, V8, LuaJIT, etc). But instead most new languages (Rust, Pony, Zig, etc) are built on top of LLVM. LLVM used to be super fast, but now even in debug mode it's sluggish.

There's usually nothing wrong with old computers & phones, but we throw them out anyway because software developers buy faster computers then take shortcuts.

alecco 44 days ago [-]
Bloated software creates demand in hardware. And demand drives supply. When CPU speeds stopped increasing they started adding more cores and created SMT. As Moore's Law is ending they start adding specialized coprocessors (M1). Circling back to the 80s.

All these fancy new hardware technologies are not aimed at particle physics, HFT, or fringe high performance shops. They are aimed at consumer grade shitty software and a whole industry who is driven by deadlines and non-technical managers.

The change in the software industry has to come from within.

vlovich123 44 days ago [-]
It will be "Moore" if you talk about "improving performance" (not what the law is actually about), as vendors start integrating more and more custom silicon to accelerate various things (e.g. neural engines, more DSPs, etc). So you may still see computers get faster and faster even though it's not the CPU itself getting faster. That being said, CPU design isn't dead. It's just more expensive for companies than the traditional "well, a new process node will fix it", like the work Apple did with the M1 (where they took advantage of the newer process node to really put the nail in the coffin).
potamic 44 days ago [-]
That's what they've been saying for the last 10 years or so, but the last decade really saw a huge leap in terms of computing. Machine learning has taken off, big data operations have reached an unprecedented scale and the latest chips from apple have blown everything else out of the water by a mile. Surely there must be a lot of advancement at chip level for all this to happen?
heimatau 43 days ago [-]
> That's what they've been saying for the last 10 years or so, but the last decade really saw a huge leap in terms of computing.

Yes. I acknowledge the nay-sayers and the history of them on this topic.

But to your point of ML being a solution: honestly, I have a difficult time seeing how ML will give manufacturing engineers enough tangible/actionable steps to make this happen. We need a new physical framework to be able to implement an engineered solution. But maybe it's like eating an elephant (one bite at a time).

I will concede this though: IF it were to happen, ML will most likely bring it about. I appreciate your optimism grounded in the technology.

ksec 43 days ago [-]
> I expect within the next 6-15 years that we'll hit this limit

You are talking about fundamental physical limits? Um... No, [1]

"At about 26-28.5 minutes of the Jim Keller interview:

Jim notes that transistors could reach 10 atoms by 10 atoms by 10 atoms while avoiding quantum effects. People are also working on harnessing quantum effects."

We are so far away from fundamental limits that even if we assume we could somehow double transistor density every two years, which we don't anymore, there are still at least another 15-20 years before we are close to that limit. And that is not accounting for any 3D transistors.

Realistically, we already have a roadmap up to 2030 from TSMC. The only limit we will hit is that the node becomes too expensive and the market can no longer afford the premium, which you could expect to happen within the next 10-15 years.

[1] https://youtu.be/Nb2tebYAaOA?t=1677

heimatau 43 days ago [-]
> Jim Keller

I watched a bunch of Jim Keller's interviews and I just fundamentally disagree with ~'we'll innovate our way out of it'.

ksec 42 days ago [-]
TSMC has a roadmap for next 10 years. Which is still far from reaching any technical roadblock.
ratsforhorses 44 days ago [-]
Dumb question, how about quantum computing, organic or parallel processing... sorry.. complete noob here but enjoying your high flying discussions that conflate so much stuff :-)
Maven911 43 days ago [-]
Moore's Law-level advancement has been considered no longer on pace since 2010.
singhrac 44 days ago [-]
This is too aggressive a take. Yes, TSMC is 50% of all EUV production right now. But it's only 50%. Samsung has huge resources and is on track to build a 3nm process by 2022/2023, which is not very far behind TSMC (though I don't know how their 3nm processes compare).

Besides just diversifying global chip production, Samsung has the (geopolitical) benefit of being in a country with large US bases and plenty of internal demand for their chips (if the Exynos designers can produce something exceptional).

NonEUCitizen 44 days ago [-]
The internal demand makes Samsung a competitor with its customers. TSMC does not have a similar conflict of interest.
noizejoy 44 days ago [-]
... and IBM was King Dingaling until people moved to desktop computing

p.s. and thus we have doxxed ourselves just a little bit: age group :-)

2sk21 44 days ago [-]
You're correct - at one point in the late 70s or even the early 80s, IBM was the biggest chip manufacturer in the world but all of the produced chips were used in their own products.
noizejoy 44 days ago [-]
There was even a kind of "cloud computing": "time-share computing"[0], since mainframe computers were financially out of reach for anyone but the very largest companies.

[0] https://en.wikipedia.org/wiki/Time-sharing

amelius 44 days ago [-]
> but all of the produced chips were used in their own products.

Sounds like something Apple would do.

2sk21 44 days ago [-]
IBM was _the_ prototypical vertically integrated company back in the late 70s before the PC: manufacture chips, build boards, build computers ranging from desktops to mainframes, develop the OS, provide financing for customers, offer integration services. Apple is nothing compared to IBM in its heyday.

When I joined IBM in 1991, it was still a very inwardly focused company. Although it was clear that things were going to be changing rapidly in the industry, it was hard to get employees to understand how precarious things were - until the big layoffs of the mid 90s.

noizejoy 44 days ago [-]
... and Tim Cook spent 12 years with IBM around the time when you joined.
ianai 44 days ago [-]
Or a third party decides to enter the market and go for position number 2. Apple could probably do it with the cash they roll - but they don't want to do so. A country could. It sure seems like chips are important enough to warrant a national supply - so international pressures can't weigh on you politically through chip supply.
na85 44 days ago [-]
I've long held the opinion that chip making will eventually go the route of uranium refining and become a national strategic capability that all great powers will pursue. I'm continually surprised that this doesn't seem to be taking place outside the defence sector.
mywittyname 44 days ago [-]
This is the case, actually. It's just this article is talking about chip production at the bleeding edge instead of the pedestrian chips used in the bulk of electronic devices.

The boring chips made for use in American military equipment are largely produced domestically.

baybal2 44 days ago [-]
Unfortunately not. Maybe the "family jewels" level secret ICs which we don't even know about, but at least the ICs for munitions have been all Taiwanese for at least 20 years.
ianai 44 days ago [-]
Honestly I’d blame governments being staffed by people who make their money in more traditional areas. Aka we need a generational change.
bluGill 44 days ago [-]
Chip making is still advancing too fast for that. If the current processes are about the limit of physics, then eventually yes. Right now it is better to hope your country just does the next leapfrog of technology.
ianai 44 days ago [-]
The TSMC model seems appropriate: Don't specialize in the design. Focus on the hardware and take designs from customers.
simonh 44 days ago [-]
I don't think Apple will get into chips for several reasons, but the main one is sunsetting old fabs. Apple wants only the best and shiniest process for their chips, but any fab inevitably gets overtaken by new technology. What to do with the ageing fabs? Apple would have no use for them, so long term it makes no sense. Better to capitalise TSMC with long term contracts and lock in access to capacity on the best nodes.
peter_d_sherman 44 days ago [-]
>"On January 13th Honda, a Japanese carmaker, said it had to shut its factory in Swindon, a town in southern England, for a while. Not because of Brexit, or workers sick with covid-19. The reason was a shortage of microchips. Other car firms are suffering, too. Volkswagen, which produces more vehicles than any other firm, has said it will make 100,000 fewer this quarter as a result. Like just about everything else these days—from banks to combine harvesters—cars cannot run without computers."


>"While car enthusiasts the world over are worried about assembly plant closures following the earthquake that ravaged Japan, many are still unaware of the significant role played by companies working at the start of the colossal logistical chain that results in the production of a vehicle.

Did you know that a single vehicle uses from 30 to over 100 chips

to control things like the parking brake, stereo, power steering and safety systems such as stability control? Development of these components is extremely complex, and only a handful of companies are able to meet the demands of the world’s automotive giants."

PDS: The world's automobile manufacturers -- that is, the world's carmakers -- would, or at least should, have a collective interest in IC/chip fabrication -- especially in light of the most recent shortage...

ahepp 44 days ago [-]
According to the news sources I've been reading[0], this is a management failure by auto companies rather than some kind of structural issue with the chip industry. So it doesn't really make sense to me that automakers might build their own fab. They're the problem, not the fab. They're crying to the newspapers because it looks bad to admit they were cheap and will have to furlough workers. Politicians are playing along with the shortage angle, because when you save a worker's job they tend to vote for you. I predict the end result will be politicians spending taxpayer money to make filling these orders worth the fab's time.

>Mr Duesmann described the problems as “a crisis upon a crisis”. Demand for cars slumped for much of last year because of the coronavirus pandemic, prompting auto suppliers to cut their orders for the computer chips that manage everything from a car’s brakes and steering to its electric windows and distance sensors.

>But demand for cars jumped unexpectedly in the final three months of 2020, as buyers became more optimistic. Audi had its best quarter ever, largely because of a rebound in China.

You say that development of the chips in cars is extremely complex, but I was under the impression car chips are generally outdated, semi-rugged processors that might have some additional safety features like lockstep cores. The control systems algorithms for the functions you describe aren't particularly complicated as far as I know.

Auto manufacturers are infamous for cost cutting. I assume selling them chips is a low margin high volume business, in which case I would be glad to replace their orders with higher margin ones at the earliest opportunity.

[0] https://on.ft.com/39V7w9h (paywalled after 3 free accesses)

ksec 43 days ago [-]

Most of the chips in cars are on 28nm if not older. And 28nm fab demand has been outstripping supply for quite a while. So 28nm products are getting much more expensive, and some customers simply refuse to accept the new price. And when they finally do, vendors say it is too late: orders are already taken.

And they just love to blame it on TSMC.

ahepp 44 days ago [-]
(if anyone wants to read the article but can't, feel free to comment and I'll reply with a new link)
ZirianDrake 44 days ago [-]
Are electric vehicles cheaper to manufacture (ignoring self-driving and batteries), including needing fewer computer chips, than fossil-fuelled vehicles? I would imagine more moving parts, plus a complex engine, fuel and exhaust system, would need more monitoring and controlling, and thus more chips.
trhway 44 days ago [-]
>Did you know that a single vehicle uses from 30 to over 100 chips

Trying to sell the extended warranty, the dealer made sure to repeat that again and again (I still didn't buy it, and as of now, 5.5 years later, no chip nor anything else has failed so far :)

kaonwarb 44 days ago [-]
Reminds me of the time a dealer tried to convince me with "math" that I was guaranteed to save money by buying the extended warranty. I asked if that meant they were guaranteed to lose money, and if so, why they were trying so hard to sell it to me. They moved on pretty quick from that.
jonathanlydall 44 days ago [-]
It’s really a kind of insurance, and like insurance it’s a gamble. On average dealerships must win, or they wouldn’t offer it, but that’s not to say that some customers don’t “win” if they get unlucky with their car.

That being said, I opted out of it with my last car.

gumby 44 days ago [-]
I did buy an extended warranty for one car, never had a warranty claim until it broke down with only about three or four months left on the warranty. That one paid off.

In another car (same mfr) I looked into getting one when the regular warranty was about to run out. Sales guy looked and said: main thing that happens to these older cars is X and Y and you already replaced X so it's not worth it. I always wondered if what he really meant was "main thing that happens is something expensive and I'd rather get paid retail to fix that". Either way I skipped it and never had a problem.

I do consider it insurance.

byset 44 days ago [-]
Maybe he was just being honest!
phkahler 44 days ago [-]
The guys car was out of warranty. Be nice and maybe he'll buy his next car from you. Makes some sense.
gumby 44 days ago [-]
An employee of a car dealership? Impossible!
simonh 44 days ago [-]
To be fair to the dealer, it might still make sense for him to sell you the warranty even if they would be likely to lose money on average. For starters the salesman himself gets commission on selling the warranty, but he might not get anything on service and repair. Also warranties are cash in hand that you can amortise over the warranty period, so it's effectively a steady revenue stream rather than bursty fits and starts. That enables more efficient financial planning.
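The insurance framing can be put in numbers with a toy expected-value sketch (all figures here are hypothetical, chosen only to illustrate the point):

```python
# Hypothetical numbers: a $2,000 extended warranty covering a 10% chance
# of a $4,000 repair and a 5% chance of a $9,000 repair over its term.
warranty_price = 2000
repair_risks = [(10, 4000), (5, 9000)]  # (probability in %, repair cost in $)

# Average payout per warranty sold: sum of probability-weighted repair costs.
expected_repair_cost = sum(p * cost for p, cost in repair_risks) // 100

print(expected_repair_cost)                   # 850: expected payout
print(warranty_price - expected_repair_cost)  # 1150: seller's average margin
```

On these numbers the seller clears $1,150 per warranty on average, yet the unlucky 5% who hit the $9,000 repair still come out well ahead, which is exactly the "gamble" described above.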
OldHand2018 44 days ago [-]
I used to work at an automotive electronics supplier back in the 1990s. The “typical” mainstream car crossed the 30-chip mark over 20 years ago. It would have had more than 10 in the early 1980s.

Most of this stuff is extremely reliable!

phkahler 44 days ago [-]
>> Most of this stuff is extremely reliable!

Automotive grade chips are probably second only to aerospace parts.

other_herbert 44 days ago [-]
I think I remember reading that auto parts included in the original vehicle have a minimum designed life of 10 years... that includes the radio and related display components.
samus 41 days ago [-]
Chip fabs are one of the most capital-intensive and brainpower-intensive industries there are. At the same time, the business runs on tight margins. A short-term supply shortage is not going to motivate carmakers to enter this entirely different market. It might happen on a national or supranational level though.
llcoolv 44 days ago [-]
I don't really believe that such a thing as a shortage exists in a free-market economy. Executives who complain of shortages are just way too rigid with their planning and are not willing to pay the increased price. Exactly this goes for VW.
npunt 44 days ago [-]
Your faith in free markets may be a bit misplaced. There's a long history of semiconductor shortages followed by markets being flooded due to lack of demand predictability and the capital costs and lag time of scaling manufacturing. DRAM manufacturers have several times illegally colluded to avoid these circumstances (and thus maintain profit margin).
jariel 44 days ago [-]
Structural barriers are 'real things' even in free market economies. The size, complexity, barriers to entry, massive government subsidies, geopolitics are all part of the equation at that level. If this was about 'wheat' or 'shoes' then yes, but at this level the equation is different.

Also, these execs are not dumb, if they could just 'adjust their price and sell 100K more cars' ... well, they've thought of that.

Probably what needs to happen is that Merkel, Bojo, EU leadership (and the same in other countries) pipe in with something to facilitate the economy, but that's also fraught with risks; it's not like Norway's Statoil, whereby they just have to 'get the oil from under the sea and it's bank'.

zarkov99 44 days ago [-]
I don't think that is right in the short term, which is the time horizon the article discusses. Eventually, barring an actual physical resource constraint, the market will take care of it, but in the short term something will have to give.
bluGill 44 days ago [-]
Car makers are big, but to a chip maker they are a tiny customer. I suspect more than one car maker has considered making their own fab, but the costs are just too high. (My company has in fact considered it: we decided not to open a fab last year because of the costs. I'm not sure that was the right decision given what we have since spent porting working software to the replacement instead, and we have a fraction of the volume of the big auto makers.)

Manufacturing is a core competency of any car maker (second to supply chain). I wouldn't be surprised to see a fab partnership between automakers in the future just to ensure they can get the chips they need. This will be at least as much about ensuring old chips don't go obsolete as about supply.

IndrekR 44 days ago [-]
Opening their own fab probably does not make sense for passenger cars. It may be reasonable for high-volume, long-lifetime speciality vehicle producers (John Deere, etc). That will not be 'a 3nm fab', however.
ahepp 44 days ago [-]
Auto companies cut the orders because they anticipated falling demand[0]

>Mr Duesmann described the problems as “a crisis upon a crisis”. Demand for cars slumped for much of last year because of the coronavirus pandemic, prompting auto suppliers to cut their orders for the computer chips that manage everything from a car’s brakes and steering to its electric windows and distance sensors.

>But demand for cars jumped unexpectedly in the final three months of 2020, as buyers became more optimistic. Audi had its best quarter ever, largely because of a rebound in China.

[0] https://on.ft.com/39V7w9h (paywalled after 3 free accesses)

kingosticks 44 days ago [-]
> And whereas designing your own chips once meant having to make them as well, that is no longer true.

This hasn't been true for what, 15-20 years? Longer? If you had a digital design and enough cash you could have gone to one of many vendors, who in turn worked with an external fab such as TSMC (or in other cases, the fab division of the same company) to create your design. When working with a vendor like Broadcom, Agere, Toshiba, IBM, Intel, or Marvell you can be entirely isolated from the physical aspects of making chips, if that's what you want. What has happened over the last few years is massive consolidation of these vendors, so now the options are far more limited. It's basically just Broadcom or Marvell at the cutting edge. I don't think this goes anywhere to explain why more companies are designing their own chips, but it's a more accurate description of reality.

klelatti 44 days ago [-]
35 years? The original ARM1 was manufactured under contract by VLSI.
gautamcgoel 44 days ago [-]
Suppose I had $100M to blow. Could I build a fab in the USA that produced chips at, say, 65nm? I understand that a cutting-edge node like 7nm would be out of reach at that price point, but I'm curious what exactly is in reach.
meekrohprocess 44 days ago [-]
Good question! I'd also be curious as to the answer.

But I suspect that it is probably "no". Because it looks like ST was planning to pay ~$1.8B per 300nm fab in 2017.


I assume that "mm" is a typo in the article, but that cost was for a >15-year-old process node. 65nm for $100M might be a stretch, if that is accurate.

patrickyeon 44 days ago [-]
That 300mm is a different measurement. It's not a typo and they're referring to the size of the wafers, in this case 300mm diameter (very close to 12 inches).
dhdc 44 days ago [-]
Yes. In addition, wafer size is another important metric of fab capabilities aside from process node, since it roughly translates into production rate.
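To make that concrete, here is a rough dies-per-wafer estimate using a common textbook approximation with an edge-loss correction term (the 100mm² die size is an arbitrary illustrative choice):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Approximate gross dies per wafer.

    First term: wafer area divided by die area. Second term: a standard
    correction for partial dies lost around the circular wafer edge.
    """
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

for diameter in (200, 300):
    print(diameter, dies_per_wafer(diameter, die_area_mm2=100))
# 200mm -> 269 dies, 300mm -> 640 dies for a 100mm^2 die
```

A 300mm wafer has 2.25x the area of a 200mm wafer, but because proportionally less area is wasted at the edge, it yields well over twice the dies, which is part of why wafer size matters alongside process node.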
gautamcgoel 44 days ago [-]
So... Does that mean 65nm at $100M is possible after all?
patrickyeon 42 days ago [-]
This is coming in late, but the short answer is "I have no idea".

The slightly longer answer is "I don't know. But once you have the space and equipment and you're sure you can handle everything safely you still need to staff this and build up some institutional knowledge." If you're thinking this far, you should ask why 65nm? Because it's a process node you know a certain microprocessor was built on? It's just a benchmark that's useful for working out your sense of what it costs to do these from scratch?

The cheapest way to get the equipment would be buying out a fab that's shutting down. When I was in undergrad, I got to play in a 5 micron (5000nm) fab on campus because the previous owner of all the equipment gifted it to the university instead of scrapping it. My gut sense is that there's nothing at 65nm that's unprofitable yet, so you're not getting cheap stuff.

(A 6 micron node was commercialized in 1974; this university lab was opened in 1993. 65nm was commercialized in 2005. While Moore's Law held, more or less, through much of 1974-2005, I think we can say that the difficulties and necessary capital investments increased super-linearly over that time. And 20-year-old manufacturing tech seems much more useful now than it did in '93. The STM32 family of microcontrollers, a very strong line, is spread over the 130-40nm range of process nodes.)
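As a back-of-the-envelope check on that 1974-2005 timeline (naively treating node names as literal feature sizes, which the top of this thread notes they are not):

```python
import math

# Average annual linear shrink implied by going from a 6000nm node (1974)
# to a 65nm node (2005), and the density-doubling period that implies.
start_nm, end_nm, years = 6000, 65, 2005 - 1974

annual_shrink = (start_nm / end_nm) ** (1 / years)  # linear scale factor/year
density_growth = annual_shrink ** 2                 # transistors per unit area
doubling_years = math.log(2) / math.log(density_growth)

print(round(annual_shrink, 3))   # 1.157, i.e. ~16% linear shrink per year
print(round(doubling_years, 1))  # 2.4 years per density doubling
```

That ~2.4-year doubling period lines up roughly with the classic "every two years" formulation of Moore's Law, even though the capital cost of each step grew far faster than the feature sizes shrank.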

meekrohprocess 43 days ago [-]
Interesting, thanks for the correction.
neonate 44 days ago [-]
11thEarlOfMar 44 days ago [-]
I tried to find some numbers on the relative number of chips in a standard ICE car vs. a BEV. I'd have to guess BEVs have perhaps even an order of magnitude more chips. Certainly, they have chips that are far more advanced, looking at the Tesla FSD computer[0].

The chip industry, and the chip equipment industry are still capacity limited due to demands of remote work and school. That may level off and revert somewhat towards the end of the year. BEV sales may take up the slack, but also may demand a somewhat different chip supply chain and drive demand for different segments in the industry. We may well see the typical crash in equipment sales in 2022 as those two market drivers transpire.

[0] https://www.cnet.com/roadshow/news/tesla-fsd-computer-retrof...

astrange 44 days ago [-]
> I tried to find some numbers on the relative amount of chips in a standard ICE car vs. a BEV. I'd have to guess BEVs have perhaps even an order of magnitude more chips. Certainly, they have chips that are far more advanced, looking at the Tesla FSD computer[0].

Well, you could have FSD in an ICE car just as easily.

In fact, I'm going to guess an electric vehicle has fewer chips than an ICE car because it doesn't have an engine. Depends how many PMICs you need for the battery cells.

samus 41 days ago [-]
It's also highly debatable whether you need chips made with the latest process nodes for these purposes. Few cars or trains have to be machine-learning monsters. Also, power electronics, used by all kinds of vehicles, is perfectly fine using larger, cheaper and more rugged nodes.
Ericson2314 44 days ago [-]
How much of the cost is in the design of the fab factory vs its physical manufacture?

My hunch is now that chip design and chip manufacturing are separate, the cost of chip design is going to be revealed as a lot lower because the opaque accounting of before allowed for stupid inefficiencies.

But in this case, I have no idea whether funding the basic materials engineering, scaling it up, or building the scaled up design, is the hard part.

hctaw 44 days ago [-]
The two can't be separated at birth. Design is an iterative process that requires feedback loops with the manufacturing teams (including testing and packaging, which may or may not be a part of the fab).

Just as an example: the design has to be altered to maximize yields. You don't know what the yields are like or what to alter in the design until you make a few chips and test them, which means you need to design the test harnesses, tape out a prototype, and get both to where they need to be. And then try again.

If the testing, fab, and design people/materials are in different places you have to move things and people around as a part of the process.

It's just an expensive endeavor that's difficult conceptually and logistically, with a lot of institutional knowledge required.
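One way to see why that yield-iteration loop matters so much: under the simple Poisson yield model (a textbook approximation, not any particular fab's actual model, and with purely illustrative numbers), the fraction of good dies falls off exponentially with die area times defect density:

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Fraction of good dies under the simple Poisson yield model."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

# A 1cm^2 die on an immature process vs. the same die after iteration
# brings the defect density down (illustrative numbers only).
print(poisson_yield(1.0, 1.0))  # ~0.37: only a third of dies are good
print(poisson_yield(1.0, 0.1))  # ~0.90: after process/design iteration
```

Going from 1 defect/cm² to 0.1 defect/cm² takes yield from roughly a third of dies to nine in ten, which is why those design-fab-test feedback loops are worth so much money and institutional knowledge.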

Ericson2314 44 days ago [-]
I'm not necessarily agreeing with that. Chip fab, jet engine fab, etc. are all at the limits of what Capitalism can handle, as the geopolitics, protectionism, and cronyism attest. The solution is less than clear:

- Because of the enormous costs of duplicated effort, I'm not sure anti-trust will work.

- Nationalist duplication might, but I am not sure whether it will hinder or hasten WWIII.

- Radical IP disembargo with institutional cross-pollination works better for codified than for opaque institutional knowledge.

What is nice about vertical disintegration is it forces the institutional knowledge to be codified. So maybe things can work in tandem, too.

adityar 44 days ago [-]
nindalf 44 days ago [-]
Strange. Copying all of the content - is what they're doing legal?
mmastrac 44 days ago [-]
IANAL, obviously, but it's probably not. I don't see this as being very transformative. You'd probably have a tough time convincing a judge.
curiousllama 44 days ago [-]
"It's not a lockpick, your honor. All it does is interact with the pins inside of certain types of locks to grant entry. Just like a key. They basically left the door open"
worker767424 44 days ago [-]
Like Elon Musk's definitely not a flamethrower.


throwaway09223 44 days ago [-]
They haven't copied any content. Outline is a javascript app that changes the way your browser renders the website.

It's your browser contacting economist.com and fetching the webpage. The javascript app then changes how it is rendered by removing stuff most people don't want to see.

Outline is available as a browser extension, if you don't want to load it by visiting outline.com.

crazygringo 44 days ago [-]
That's not true. It doesn't load the content through your browser, it loads it through its servers. They indeed copy content, and it's up to you to develop your own opinions as to the legality or ethicality of that. :)
throwaway09223 44 days ago [-]
Huh, I stand corrected. I should've looked more closely before commenting.
trhway 44 days ago [-]
>It's your browser contacting economist.com and fetching the webpage

chrome devtools->network tells a different story.

ThisIsTheWay 44 days ago [-]
You could see the same with a quick Ctrl+U, Outline is just nicely formatted.
gitowiec 44 days ago [-]
Yes, we put chips in everything, and if this occurs https://youtu.be/hESunUuFrzk what then?
intricatedetail 44 days ago [-]
How difficult is it to recreate e.g. 100nm lithography DIY?
abdullahkhalids 44 days ago [-]
These guys are making headway https://libresilicon.com/
samus 41 days ago [-]
Could you find out which node size? I could find a reference to 1 micrometer. It's a good start, and probably more than enough for many purposes, but there's a long way to go...
namibj 44 days ago [-]
Very. 350nm is about the limit of what [0]'s DIY process should be scalable to, and even that would require somewhat more precision outside of the photolithography itself, as well as clean-room conditions.

[0]: http://sam.zeloof.xyz/

Koshkin 44 days ago [-]
Looks like we're at a point right now at which we can make a vacuum tube:


ivanstame 44 days ago [-]
This article is hidden behind a paywall.
fumblebee 44 days ago [-]
ivanstame 43 days ago [-]
I didn't know about this service. Thanks a lot.
lincpa 44 days ago [-]
Forecast(2021-01-19): I think Intel, AMD, ARM, supercomputing, etc. will adopt the "warehouse/workshop model"

In the past, the performance of the CPU played a decisive role in the performance of the computer. There were few CPU cores, and the number and types of peripherals were small. Therefore, the CPU became the center of the computer hardware architecture.

Now, with more and more CPU and GPU cores, and growing numbers and types of peripherals, the communication, coordination, and management of cores (or components and peripherals) have become more and more important; they have become a key factor in computer performance.

The core views of management science and computer science are the same: use all available resources to complete the goal with the highest efficiency. Accomplishing production goals through communication, coordination, and management of various available resources is precisely the domain of management science. The most effective, reliable, and mainstream way is the "warehouse/workshop model".

So I think Intel, AMD, ARM, supercomputing, etc. will adopt the "warehouse/workshop model", which is an inevitable trend in the development of computer hardware.


anigbrowl 44 days ago [-]
You don't need to put the name of the publication in the title, HN does it automatically.
dang 44 days ago [-]
Removed now. Thanks! Submitted title was 'Economist: Chipmaking Is Being Redesigned'.

This is in the HN guidelines actually ("If the title includes the name of the site, please take it out, because the site name will be displayed after the link.")


klelatti 44 days ago [-]
Thanks and apologies! If you submit once in a blue moon then it's easy to forget.