kenobi
1/31/2015 2:03:00 PM
On Saturday, 31 January 2015 at 13:16:18 UTC+1, Mark Carroll wrote:
> fir <profesor.fir@gmail.com> writes:
>
> > So the question is WHY it cannot be sped up; can't the throughput of
> > this L1 cache to the CPU (in both read and write) be increased?
> > What is the real technical reason that it stays fixed and does not improve?
>
> I'd be surprised if it hadn't been improving, but I'd guess that we have
> three main technical limitations for general CPU design:
>
> * As you start using low power in microscopically tiny wiring you start
> running into quantum effects such as tunnelling.
>
> * It takes energy to change the voltages on the wires. The faster you do
> this, the more energy you use per second, so the hotter the chip gets.
> So, you run into thermal issues as power dissipates as heat.
>
> * When voltage changes are fast enough, you also run into high-frequency
> analog issues, such as inductance with the changing magnetic fields
> inducing current in nearby wiring.
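[On the thermal point above: the usual back-of-the-envelope model is the standard dynamic-power formula, sketched here for context (the symbols are the conventional ones, not anything from this thread):]

```latex
P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} \, f
```

where \(\alpha\) is the switching-activity factor, \(C\) the switched capacitance, \(V\) the supply voltage, and \(f\) the clock frequency. Power grows linearly with frequency at fixed voltage, and in practice a higher \(f\) also demands a higher \(V\), so heat grows faster than linearly with clock speed.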
>
> With regard to the cache issue, I'd also wonder if, in terms of moving
> data around between sections of the chip, such as between cache and
> computation units, even with multi-layer stacked circuits there are
> physical space limits on wiring the buses in and out of everywhere;
> and, with logic gates on the path in the circuit (to route signals or
> whatever), there may well be propagation delays in the signal that
> become limiting at high computation speeds.
>
> I'd be interested to be corrected by somebody who knows rather more than
> I about modern CPU design; after all, the last hardware course I took
> was last century.
>
They parallelized FPU arithmetic (like 8-way float SIMD: muls, divs), so WHY can't they "parallelize" movs?
There must be some reason, and as I said this is an absolutely critical thing; why are they not doing that?