> Their claims seem a little thin.

I don't know what you're talking about. They were plane from what I read.
> In any case, the electronic properties of these materials are strictly a product of the orbital configurations of the molecule itself; there is no bulk material from which bulk properties can emerge.

I'm not sure what this sentence is supposed to mean. We still have a bulk material (i.e. not a molecule), even if it is only a single layer. The bulk properties will be different from the 3D case (e.g. phonon hydrodynamics in two-dimensional materials), but not non-existent, and thus can also affect the electronic properties.
> I don't know what you're talking about. They were plane from what I read.

Your wordplay is falling flat.
> I also TIL another semi-related thing; I didn't know RISC-V 32-bit could be implemented with under 6K transistors. Native Linux executed on under 6K transistors. That's pretty cool.

Given enough memory it should be possible to run Linux on any Turing machine. I did a bit of searching and there's no clear consensus on the number of transistors required to make such a machine, but the numbers range from single digits to a couple hundred.
> Given enough memory it should be possible to run Linux on any Turing machine. I did a bit of searching and there's no clear consensus on the number of transistors required to make such a machine, but the numbers range from single digits to a couple hundred.

Oh yes, the Linux 4004 Project [Ars]! Interestingly, it emulated a 32-bit RISC instruction set to boot Linux in 4.7 days on 1971 silicon! I was actually aware of it; that's why I said "Native Linux" to disambiguate.
> I'm just imagining what a Beowulf cluster of these could accomplish....

It could require many years of operation to get the first result; perhaps then we will know.
> ~2300 versus ~5900 transistors for 4-bit versus 32-bit native. I didn't know that was that tiny a difference.

Since any 1-bit computer (even relay-based) can emulate any word-sized modern computer, it's really down to 1-bit microcode ROM size vs. 1-bit hardware tradeoffs. Even 1-bit hardware can emulate 64-bit architectures with minimal microcode, but VERY slowly in either case. This is fundamental to all Turing-complete machines.
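To make that concrete, here's a minimal C sketch (my own illustration, nothing from the article): the entire arithmetic datapath is one full adder, and word width comes purely from sequencing. Widening the machine only lengthens the loop, not the hardware.

#include <stdint.h>
#include <stdio.h>

/* One step of a 1-bit ALU: a single full adder. This is the entire
 * arithmetic datapath; everything wider is just sequencing. */
static unsigned full_adder(unsigned a, unsigned b, unsigned *carry)
{
    unsigned sum = a ^ b ^ *carry;
    *carry = (a & b) | (a & *carry) | (b & *carry);
    return sum;
}

/* A 32-bit add performed serially, one bit per "clock tick".
 * The loop plays the role of the microcode/sequencer. */
static uint32_t add32_serial(uint32_t a, uint32_t b)
{
    unsigned carry = 0;
    uint32_t result = 0;
    for (int bit = 0; bit < 32; bit++) { /* 32 ticks per addition */
        unsigned s = full_adder((a >> bit) & 1, (b >> bit) & 1, &carry);
        result |= (uint32_t)s << bit;
    }
    return result;
}

int main(void)
{
    printf("%u\n", add32_serial(40000000u, 2345678u)); /* prints 42345678 */
    return 0;
}

Going to 64 bits means a longer loop and wider operands, not more adder hardware: the tradeoff is transistors for clock cycles.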
> I also TIL another semi-related thing; I didn't know RISC-V 32-bit could be implemented with under 6K transistors. Native Linux executed on under 6K transistors. That's pretty cool.

It seems much too small to me. I was thinking there had to be a mistake.
> It seems much too small to me. I was thinking there had to be a mistake.

The original RISC-I had something like 44K transistors [Wikipedia]!
It's so old, it pre-dates calling things memes....
> What is the benefit of this material/approach over “traditional” ones? The end of the article implies that at low clock speeds it is much lower power than silicon - is there any info available offering comparisons? I could see utility for (slow) ultra-low-power devices like remote sensors, but my experience with such is that issues with access to the sensor data (power and other communications limitations) outweigh much of the purported benefits (and we had very-low-power silicon-based sensors).

I guess if the sensor output is for local consumption (say, for spacecraft navigation) that might be useful. I assume there are also weird applications where an ultrathin sensor layer that generates little heat can be helpful, say between two stacked chips, or in a display where thinness is the more important feature. But yeah, remote sensors that then transmit the data sort of ruin the savings, unless a summary can massively compress the data transmitted.
> I also TIL another semi-related thing; I didn't know RISC-V 32-bit could be implemented with under 6K transistors. Native Linux executed on under 6K transistors. That's pretty cool.

Makes you wonder how it would run, doesn't it? Just from a manufacturing point of view it's kind of amazing, isn't it?
> It seems much too small to me. I was thinking there had to be a mistake.

Yeah, that doesn't sound right; the register file alone would eat up that budget. I don't particularly want to pay for the paper, but I think this is a clue: "This also required on-board buffers to store the intermediate results." My guess is that they've moved a lot off-chip that would normally be on-chip even in early microprocessors.
> EDIT: Napkin-exercise moment: I checked some stats on some retro CPUs. Even 32 clock cycles for 32-bit math seems ginormously faster than a 6502 doing 32-bit addition (multiple hundreds of cycles), or even a Z80 (almost a hundred cycles). This doesn't account for memory vs. register performance, though; that's an unknown...
; 6502 32 bit addition
;
ADD32 CLC ; 2 cycles
LDA A+0 ; 4 cycles
ADC B+0 ; 4 cycles
STA C+0 ; 4 cycles
LDA A+1 ; 4 cycles
ADC B+1 ; 4 cycles
STA C+1 ; 4 cycles
LDA A+2 ; 4 cycles
ADC B+2 ; 4 cycles
STA C+2 ; 4 cycles
LDA A+3 ; 4 cycles
ADC B+3 ; 4 cycles
STA C+3 ; 4 cycles
RTS ; 6 cycles
; 56 total
; Shorter, but slower
;
ADD32 CLC ; 2 cycles (clear carry before the first ADC)
      LDX #$FC ; 2 cycles
LOOP  LDA A-$FC,X ; 4*4 cycles
      ADC B-$FC,X ; 4*4 cycles
      STA C-$FC,X ; 5*4 cycles (STA abs,X always takes 5)
      INX ; 4*2 cycles
      BNE LOOP ; 3*3 + 2 cycles
      RTS ; 6 cycles
; 81 total
A DS 4
B DS 4
C DS 4
I also TIL another semi-related thing; I didn't know RISC-V 32-bit could be implemented with under 6K transistors. Native Linux executed on under 6K transistors. That's pretty cool.
The original RISC-I had something like 44K transistors [Wikipedia]!
They didn't have the transistor-count/circuitry-reuse optimizing software in the 1980s that exists today.
Apparently, the article mentions simplifications such as computing the 32-bit registers one bit at a time, requiring 32 clock cycles for one addition. Beyond that, I read elsewhere that they pulled off quite a few optimization feats to cram a native 32-bit instruction set into under 6K transistors.
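As a rough model of that one-bit-at-a-time scheme (my sketch, assuming a classic bit-serial datapath; the paper may well do it differently): registers behave like shift registers streaming through a single shared full adder, one bit per clock, which is where both the 32-cycle addition and the transistor savings come from.

#include <stdint.h>
#include <stdio.h>

/* Registers modeled as shift registers: each tick shifts one bit out of
 * A and B, through a single shared full adder, and into the result.
 * 32 ticks per 32-bit addition. */
typedef struct { uint32_t a, b, sum; unsigned carry; } serial_alu;

static void tick(serial_alu *s)
{
    unsigned abit = s->a & 1, bbit = s->b & 1; /* LSBs shift out */
    unsigned out = abit ^ bbit ^ s->carry;
    s->carry = (abit & bbit) | (abit & s->carry) | (bbit & s->carry);
    s->a >>= 1;
    s->b >>= 1;
    s->sum = (s->sum >> 1) | ((uint32_t)out << 31); /* result shifts in */
}

int main(void)
{
    serial_alu s = { .a = 123456789u, .b = 987654321u, .sum = 0, .carry = 0 };
    for (int t = 0; t < 32; t++) /* the "32 clock cycles" per add */
        tick(&s);
    printf("%u\n", s.sum); /* prints 1111111110 */
    return 0;
}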
I wonder how efficient this RISC-V CPU would be compared to a Z80 (~8900 transistors) if manufactured in a 1980s-era fab!
EDIT: Napkin-exercise moment: I checked some stats on some retro CPUs. Even 32 clock cycles for 32-bit math seems ginormously faster than a 6502 doing 32-bit addition (multiple hundreds of cycles), or even a Z80 (almost a hundred cycles). This doesn't account for memory vs. register performance, though; that's an unknown...
> ADC HL,DE

I thought ADC only existed for 8-bit operands; if you combine two 8-bit registers into one 16-bit register you'd only get ADD without carry. I thought the Z80 didn't have automatic carries between consecutive 16-bit ADDs.
> [6502 code]
> I thought ADC only existed for 8-bit operands; if you combine two 8-bit registers into one 16-bit register you'd only get ADD without carry. I thought the Z80 didn't have automatic carries between consecutive 16-bit ADDs.
But I'm now rethinking: even with corrections, I think you can do it in under 50 cycles, just not 25-30. So not much slower.
My memory is foggy about the Z80's capabilities and limitations. Decades ago I mainly did assembly on the 6502, which didn't have this convenience of combining two 8-bit registers into a 16-bit register. The 6502 only conveniently has A, X, and Y, all 8-bit, so you'd take the hit of accessing memory for 32-bit values. So 8 memory accesses.
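For reference, here's the byte-at-a-time CLC/ADC/STA pattern from the 6502 listing above, modeled in C (my sketch, not thread code):

#include <stdint.h>
#include <stdio.h>

/* Add two 32-bit values stored as 4 little-endian bytes in memory,
 * one byte at a time, with an explicit carry flag. */
static void add32_bytes(const uint8_t a[4], const uint8_t b[4], uint8_t c[4])
{
    unsigned carry = 0;                   /* CLC */
    for (int i = 0; i < 4; i++) {         /* 8 byte reads, 4 byte writes */
        unsigned t = a[i] + b[i] + carry; /* LDA / ADC */
        c[i] = (uint8_t)t;                /* STA */
        carry = t >> 8;                   /* carry into the next byte */
    }
}

int main(void)
{
    uint8_t a[4] = { 0xFF, 0xFF, 0x00, 0x00 }; /* 0x0000FFFF */
    uint8_t b[4] = { 0x01, 0x00, 0x00, 0x00 }; /* 0x00000001 */
    uint8_t c[4];
    add32_bytes(a, b, c);
    printf("%02X%02X%02X%02X\n", c[3], c[2], c[1], c[0]); /* prints 00010000 */
    return 0;
}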
Yes. Even before you posted, I re-evaluated, and you can add two 32-bit memory values (8 bytes read, 4 bytes written) on a 6502 in less than 100 cycles, even with the cost of the memory accesses. That's consistent with your loop and unrolled versions. So yeah, not much of a performance advantage after all.
I think I incorrectly remembered based on 32-bit floats (doing IEEE 754 is still gonna take you low hundreds of cycles on a 6502), not integers. I have hereby been self-corrected.
Yeah, the extra transistors on those old processors do speed up 8-bit and 16-bit ops a fair bit.
That RISC-V in under 6K transistors may not be much of an advantage at the same clock frequency after all. Still, the article's RISC-V implementation is a great transistor-count optimization: that's a 32-bit CPU with fewer transistors than a Z80.
> ADD IX,BC -> fairly standard (you can do lots of the HL ops on IX/IY, though not the ED instructions) - 15 cycles.

Had to pull the extended Z80 manual -- I stand corrected (page 190). Thank you!
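For anyone following along, the pattern being described, ADD for the low word and ADC for the high word with the carry flag linking them, modeled in C (my sketch, not thread code):

#include <stdint.h>
#include <stdio.h>

/* A 32-bit add built from two 16-bit operations, carry linking them. */
static uint32_t add32_via_16(uint32_t a, uint32_t b)
{
    unsigned lo = (a & 0xFFFFu) + (b & 0xFFFFu);  /* like ADD HL,DE */
    unsigned carry = lo >> 16;                    /* carry flag */
    unsigned hi = (a >> 16) + (b >> 16) + carry;  /* like ADC HL,DE */
    return ((uint32_t)(hi & 0xFFFFu) << 16) | (lo & 0xFFFFu);
}

int main(void)
{
    printf("%08X\n", add32_via_16(0x0001FFFFu, 0x00000001u)); /* 00020000 */
    return 0;
}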
> The DOI link seems broken.

It matches the Nature article. DOIs can often take days to become active.
> OT, but what happened to graphene? It was getting hyped and then 3D printing or wearables became The Thing and we don't hear about it much. Is it like aerogel, where it suddenly turns up somewhere quite mundane?

Supercapacitors are in production (a company called Skeleton makes them), and I remember a UK company is using graphene to make some sort of sensor (details escape me). It's still expensive, but it has found some niches.
Goes back further than that. WIRED magazine was throwing the term around in the early '90s. Ref: vol 2.02, Feb 1994, p. 71, “memegraphics”; vol 3.12, Dec 1995, “meme”.
Wow, thermal paste; we are living in the future! I was curious where they use graphene, and every reference is "could be used as" and "research shows". The commercial adoption is very mundane: no space elevators or cures for cancer, it's more like slightly better hull coatings.
> It's so old, it pre-dates calling things memes....

WIRED magazine, vol 2.02, Feb 1994, p. 71, “Memegraphics”.