x
On 26/11/09 19:26, Bill wrote:
I honestly had never heard of a 64 bit processor. But were you aware of 8 bit (around the BBC micro/ZX Spectrum era), 16 bit (PCs around the DOS era) or 32 bit (PCs from the Windows 3.x era) processors?

+++++

Good grief! No, I was not even remotely aware of such things!

It's not simply down to the size of numbers that a machine can handle; after all, 8 bit machines could handle numbers larger than 256, and 16 bit machines could handle numbers bigger than 65535. The main driver for increasing bit depth is to address larger amounts of memory. For most home users the roughly 4GB limit hasn't hit yet, but for businesses, servers capable of handling 32GB or 256GB of memory are not uncommon. Similarly, there are several instances related to disk storage where the 2 terabyte limit rears its ugly head.

+++++

Are these limits inherent in some way?

Bill
x
If you have upgraded your PC in the last couple of years there is a fair chance you are typing on one now! (The current AMD/Intel 64 bit processors also run 32 bit code to support existing apps and operating systems.)

++++++++

That's what made me ask the question! I have to run a 32 bit version of the internet thingy or iPlayer won't work.

Bill
x
I suppose I've lost Bill by now, unless he was having us on about his inability to understand this sort of stuff.

++++++

I certainly was not having you on. There's no need for me to ACT daft.

Bill
x
"Bill" wrote:
Are these limits inherent in some way?

Kind of. As new microprocessors were made with increasing bit-sizes, the number of pins on the chip packages increased enormously, as the processors had to convey the information on their "data bus" (and other busses) to and from the surrounding components over parallel connections. It required considerable advances in chip-packaging and printed-circuit-board technology to make the larger processor bit-sizes possible.

The larger bit-sizes are desirable because handling data in parallel means that the processor can work that much faster. It is possible for a processor to handle data that exceeds its bit-size, but only at the expense of having to chop that data up into small chunks, processing it a chunk at a time, and then re-assembling it (in a way that's hidden from the ordinary computer user), and that slows it down considerably.

As for why processor bit-sizes are typically 8, 16, 32, and 64: those numbers are in the mathematical sequence of powers of two, and it makes the internal maths easier to handle data in chunks of those sizes.

--
Dave Farrance
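To make the "chop it into chunks" point concrete, here is a small illustrative sketch (not from the thread, and the function name is made up): adding two 64 bit numbers on a notional 32 bit machine by splitting each into 32 bit halves and propagating the carry by hand, which is what narrower hardware has to do behind the scenes.

```python
# Sketch: 64-bit addition done in 32-bit chunks, as a narrow CPU would.
MASK32 = 0xFFFFFFFF

def add64_on_32bit(a, b):
    # Split each 64-bit value into low and high 32-bit halves.
    a_lo, a_hi = a & MASK32, (a >> 32) & MASK32
    b_lo, b_hi = b & MASK32, (b >> 32) & MASK32

    lo = a_lo + b_lo
    carry = lo >> 32            # 1 if the low halves overflowed
    lo &= MASK32

    hi = (a_hi + b_hi + carry) & MASK32
    return (hi << 32) | lo

print(add64_on_32bit(2**32 - 1, 1))   # 4294967296: the carry ripples into the high word
```

Two 32 bit operations plus carry handling instead of one 64 bit operation, which is why doing this routinely slows things down.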
x
On 27/11/09 00:03, Max Demian wrote:
I suppose I've lost Bill by now, unless he was having us on about his inability to understand this sort of stuff.

I *was* trying to avoid my reply looking too much like the WikiP article Bill first encountered!
x
On 27/11/09 04:47, Bill wrote:
Are these limits inherent in some way?

Yes (and no!). A 32 bit processor is limited to 2^32 bytes (= 4GB) of memory; various legacy reasons mean most can only sensibly use about 3GB of that. Then there are nailed-on schemes (such as PAE) that allow machines to have more than 4GB of memory, so long as each individual program doesn't want to see more than 4GB of it.

For disc storage, some programs are limited to 2GB or 4GB files, and some filesystems are limited to 2^32 disc sectors of 512 bytes each (= 2TB), which is "only" the size of the largest hard disk available nowadays.

The industry has to go through the pain every few years: from 8 to 16 bit, from 16 to 32 bit, and now from 32 to 64 bit (for Windows, that is; we had 64 bit VMS boxes back in the early 90's and Linux has had a fairly painless 64 bit option for years). Thankfully each doubling of bits has more effect than the last one, so I don't expect I'll ever have to worry about 128 bit processors unless there's a breakthrough in cryogenic storage.
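The arithmetic behind those limits is easy to check for yourself (a sketch, not code from any OS): a 32 bit address reaches 2^32 bytes, and a filesystem that counts sectors in a 32 bit field tops out at 2^32 sectors.

```python
# Where the 4GB and 2TB ceilings come from.
GB = 2**30
TB = 2**40

max_ram_32bit = 2**32              # bytes addressable with a 32-bit address
max_disk_32bit = 2**32 * 512       # 2**32 sectors of 512 bytes each

print(max_ram_32bit // GB)         # 4  -> the familiar 4GB memory ceiling
print(max_disk_32bit // TB)        # 2  -> the 2TB disk limit
print(2**64 // 2**32)              # 4294967296 -> 64-bit multiplies the space ~4-billion-fold
```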
x
In article , John Rumm wrote:

You still get Outlook Express users falling into the 2GB trap[1] and their email going bang. [...]

[1] Using 32 bit signed integers, you get a wrap-around from + to - at 2GB.

What is it exactly that is limited to 2GB? The messagebase? I don't use OE myself, but sometimes have to help people who do.

Rod.
--
Virtual Access V6.3 free usenet/email software from http://sourceforge.net/projects/virtual-access/
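A sketch of that wrap-around (illustrative only; the helper name is made up): a size tracked in a 32 bit *signed* integer flips from positive to negative once it passes 2^31 - 1 bytes (2GB), at which point any size check goes haywire.

```python
def as_signed32(n):
    # Interpret the low 32 bits of n as a two's-complement signed value.
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

print(as_signed32(2**31 - 1))   # 2147483647  -> just under 2GB, still fine
print(as_signed32(2**31))       # -2147483648 -> one byte later, a "negative" size
```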
x
In article , Bill writes

I just wondered what '64 bit' meant.

A byte (sure you've heard of that) is made up of 8 bits, or two nibbles; the nibble is convenient because each one corresponds to a single hexadecimal digit.

'Bitness' of a computer refers to the size of the chunks of data it works with and, in particular, how much memory it can address. Early home computers were 8-bit: they were limited in the amount of memory they could access, with a maximum memory size of 64KB (65,536 bytes). Modern PCs are mostly 32-bit; the potential memory size doubles for every address bit added, giving a maximum memory size of 4GB. We're now pushing the envelope at 4GB (mainly thanks to some brain-dead decisions by the PC and processor makers), so we now have 64-bit machines, with (something silly) maximum memory.

64-bit has been around for many years, mainly in supercomputers and workstations running an operating system called UNIX and similar OSes such as Linux which were developed with 64-bit architectures in mind, but it has only recently entered the PC market. The main reasons for this are that it needs a new OS (e.g. 64-bit Windows), and hardly any existing software will run on it.

And why is this message called 'x'? Because this newsreader insists that every message has a title. Why? It's a requirement so that newsreaders can thread articles (group articles with the same title together) so that you can logically follow the conversation. It's also a very good idea to use a meaningful Subject: (what you call a title); then the thread will attract those interested in it.

--
Mike Tomlinson
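The "doubles for every bit" claim is simple to verify (a sketch; note the address width need not match the data width, which is how 8 bit micros with 16 bit address buses reached 64KB):

```python
# Maximum directly addressable memory for various address widths.
for address_bits in (16, 32, 64):
    print(address_bits, "address bits ->", 2**address_bits, "bytes")

print(2**16)            # 65536 -> the 64KB limit of the 8-bit home-computer era
print(2**32 // 2**30)   # 4     -> the 4GB limit of 32-bit PCs, in GB
```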
x
In article , Bill writes

But software shouldn't be used to blindly enforce something that is protocol or manners, right as it might be. It's officiousness.

I couldn't disagree more. The roads would be chaos without the Highway Code and the law. Usenet would be chaos without some rules. How would you deal with it if all the groups were amalgamated into one called 'news.all'? There are about 60,000 groups, depending on who you ask!

Usenet is defined in RFC 1036: www.faqs.org/rfcs/rfc1036.html

FAQ is 'frequently asked question'. RFC is 'request for comments'. This is the traditional way of setting standards for everything 'net related - it defines protocols, etc. and goes back to RFC 1, produced in 1969. TCP/IP, the common protocol that allows totally different computers to intercommunicate, is explained in RFC 1180. Without it we'd be floating in a sea of incompatible protocols, proprietary software, and territorial battles today. You might think it's bad enough today; imagine what it would be like without some rules.

The Internet began with the academic community, and as academics arrive at agreement by consensus and peer review, the title 'request for comments' was felt more appropriate than "Rules for XYZ".

Usenet is a peer-to-peer protocol. This means there's no one giant server; instead, articles are passed between news peers that agree to exchange messages. It is what is known as a 'distributed' resource.

--
Mike Tomlinson
x
"Bill" wrote in message ...

I suppose I've lost Bill by now, unless he was having us on about his inability to understand this sort of stuff.

++++++

I certainly was not having you on. There's no need for me to ACT daft.

I think the problem you hit was that Wikipedia isn't designed for any particular audience. If (as I suspect) you were looking for an explanation of the term "64-bit processor" (and how such an animal differs from a processor with other numbers of bits), you are immediately redirected to the "64-bit" page you quoted, which is very technical and intended for CPU freaks. I can't find a page that explains in a clear way the difference between the "machine word" sizes of microprocessors and their significance, suitable for the non-specialist (not that I count myself as one).

--
Max Demian
HomeCinemaBanter.com