#31
On Sun, 01 Jun 2008 23:21:28 +0100, Andy Burns wrote:

> On 01/06/2008 23:02, Andy Champ wrote:
>> Our critical servers are all fitted with dual PSUs. The idea is that one is fed from the UPS, and one straight from the mains. Lose the mains, and the UPS kicks in. Lose the UPS, and they run straight from the mains.
>
> Early morning generator test occurs: you lose mains for a few seconds; no problem, the UPS cuts in and the generator starts. But that extra server you added recently increases the inrush current as half of your PSUs come back on line at once, and the mains MCB trips. The current drawn by the other half of your PSUs is now double normal; workload increases on the servers as people start logging on for the day; the servers draw that critical few extra amps; a circuit breaker on the UPS reaches its limit and trips; and it all goes dark and quiet ...

...or...

My favourite is the 'Domestos' scenario, where the UPS decides that the mains frequency isn't near enough 50 Hz even though, ironically, it's good enough for the purpose. It then kicks in, only to discover its batteries are gubbed, and in its death throes it puts out *severe* spikes, killing everything it feeds... dead. We had this with the transmission system at our old building, and it killed a Sony LMS pretty convincingly. If you get these sorts of spikes, they will likely rampage through the UPS-connected PSUs, blowing them and the attached motherboards or, if you're lucky, just causing them to hang.

Because of this kind of situation, IMHO, anywhere there's a UPS there's a single point of failure for all the attached devices, regardless of almost any other redundancy provision.
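To put toy numbers on that cascade (every figure below is invented purely for illustration, not taken from any real installation), a quick back-of-the-envelope sketch in Python:

    # Toy model of the cascade above - all figures invented.
    servers = 12                 # dual-PSU servers on the circuit
    amps_per_server = 2.0        # steady-state draw per server (A)
    mcb_rating = 32.0            # mains-side MCB rating (A)
    inrush_multiple = 6.0        # rough switch-mode PSU inrush vs steady draw

    # Normally the load is shared: each PSU carries half of its server.
    steady_per_psu = amps_per_server / 2
    mains_side_steady = servers * steady_per_psu
    print(f"Mains-side steady draw: {mains_side_steady:.1f} A "
          f"(MCB rated {mcb_rating:.0f} A)")

    # Mains returns after the generator test: every mains-side PSU
    # restarts at once, each briefly pulling several times its
    # steady current.
    inrush = mains_side_steady * inrush_multiple
    print(f"Restart inrush: {inrush:.1f} A -> MCB trips: {inrush > mcb_rating}")

    # With the mains side tripped, the UPS-side PSUs carry double
    # normal, just as the morning logins add those few extra amps.
    ups_side_draw = servers * amps_per_server
    print(f"UPS-side draw is now {ups_side_draw:.1f} A")

The point is that restart inrush is several times the steady draw, so a breaker sized comfortably for the normal load can still trip, and the surviving PSUs then carry double current just as the morning workload arrives.

--
Z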
#32
"Zathras" wrote in message news ![]() On Sun, 01 Jun 2008 23:21:28 +0100, Andy Burns wrote: My favourite is the 'Domestos' scenario - where the UPS decides that the mains frequency isn't near enough 50Hz even, ironically, though it's good enough for the purpose. It then kicks in only to discover its batteries are gubbed and, in its death throes, puts out *severe* spikes killing everything it feeds..dead. The UPS failure I mentioned in my previous message was caused by an overload on one of the phases, we soon discovered that some offices in the building considered 'critical business equipment' to include the photo-copier, coffee machine, fridge and a microwave. They had managed to string 4-gang extension right they way round the office to ensure almost every piece of electrical equipment was connected to the wonderful UPS, therefore overloading one of the phases on the carefully specified UPS. //Clive. |
#33
"Zathras" wrote in message ... On Sun, 1 Jun 2008 20:59:13 +0100, ":Jerry:" wrote: "Zathras" wrote in message . .. snip At out new building, we don't have any - just two kinetic batteries and a generator that comes up before the KB's spin down. I assume by "kinetic batteries" you mean 'motor-flywheel-generators', if so, how is this implemented? Two contra-rotating units (fearlessly) located on the roof. They supply energy instantly and for a longer time than the generator takes to start and stabilise. I don't have any more detail of our installation so for more than that, Google is the place to look - KBs are amazing devices! Where I worked, we just had a generator and no other provision for maintaining power when the mains went down. One notorious day, the mains went off so all the computers went down. A few seconds later (but too late to keep the computers going) there was an almighty roar from outside the window, accompanied by clouds of black smoke, as the generator kicked in. So the power came back. Just as we'd started to boot up the servers, there was a loud bang from the generator, a blinding flash of blue and a huge sheet or orange flame: it transpired that the power being drawn was vastly in excess of what the generator was rated to supply and it had overloaded, setting fire to the generator windings and then the tank of diesel. The folly of blocking off one of the car-park entrances as a security measure was highlighted when the fire engines tried to use it as the designated route to avoid the crowds of mingling people who had been evacuated into the car park, and found that they couldn't any longer - a classic case of one thing leading to another: lots of lessons were learned that day! At the other end of the scale, when I was working in a small office and there was a power cut, they realised that not only were the computers down but also the phone system failed because it was VOIP and the router had no power. I saved the day when i remembered that I had a 12V-240V inveter in my car (useful for powering my laptop etc on the move) and we hooked it up to the phone system so we could still take phone calls. A classic "all eggs in one basket" failure; I think they changed things afterwards so that a power failure in the router failed over to a conventional analogue line with a non-cordless phone. |
#34
On Sun, 01 Jun 2008 23:02:41 +0100, Andy Champ wrote:

> Java Jive wrote:
>> Somehow, I can't show any surprise ...
> <snips long scary story>
> Our critical servers are all fitted with dual PSUs. [snip]
> Repeat after me: "No Single Point Of Failure"

Yes, that story took place some years ago, nearer the start of my IT career (which was itself a second career) than the end. Towards the end of it, after some promotion, I was in a party from the same firm invited to IBM in the States, where they demonstrated all their latest servers, which featured dual everything and the then-latest hotplug technology. The demo was quite impressive. A rather nice petite girl set up a presentation running off a server via the network, and then invited some of us down one by one to pull bits out of the server. We removed a PSU, a netcard, some RAM, a disk, and I can't remember what else, and it never even blinked.

But the thing about my original story that I could never get my head round was the stupidity of doing a test like that during office hours. If the supply had never failed, or had failed only out of office hours, the test lost more collective man-hours than if we'd never had a UPS at all; while if the supply had failed during office hours, we would have been no worse off with no UPS at all! And either way, we would have saved the cost of the equipment and the 'maintenance' contract. Such a system makes no sense if it is tested at a time when the potential consequences are as catastrophic as those the system itself is supposed to guard against. It only makes sense to test it at times when the consequences are not severe.
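Put crudely, with invented figures (say 200 staff and a two-hour recovery), the arithmetic of that argument looks like this:

    # Toy version of the argument above - all figures invented.
    staff = 200          # people idled when the servers go down
    outage_hours = 2.0   # time to recover after an uncontrolled crash

    # An in-hours UPS/generator test that goes wrong costs the same
    # man-hours as the daytime mains failure it was meant to prevent:
    lost_by_failed_test = staff * outage_hours
    lost_by_real_outage_without_ups = staff * outage_hours
    print(lost_by_failed_test == lost_by_real_outage_without_ups)  # True

    # The difference is that the test happens on a schedule you chose,
    # so the only sane choice is a time when staff are NOT logged on.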
#35
On Mon, 02 Jun 2008 09:42:06 +0100, Zathras wrote:

>>> It's regularly run, tested and maintained -
>> Hopefully more than check the oil, start it, stop it and leave it until the next "test". It should be checked, started, run with a hefty load for several hours, then stopped and checked again. If it's like our last one, it's probably a twin engine unit. Single alternator or a "twin set"?
> Two engines, two alternators and synchronising switchgear. Totally reliable IME.

A lot of people say that, until the system is pressed into serious action. B-)

--
Cheers Dave.
#36
"Mortimer" wrote in message et... snip A classic "all eggs in one basket" failure; I think they changed things afterwards so that a power failure in the router failed over to a conventional analogue line with a non-cordless phone. I always understood it as a H&S pre-requisite that at least one phone had to be connected (or at least fall-over to) a conventional analogue line for exactly the reason you cite - and that phone had to be sited were anyone could use it? |
#37
"Clive" wrote in message ... "Zathras" wrote in message news ![]() On Sun, 01 Jun 2008 23:21:28 +0100, Andy Burns wrote: My favourite is the 'Domestos' scenario - where the UPS decides that the mains frequency isn't near enough 50Hz even, ironically, though it's good enough for the purpose. It then kicks in only to discover its batteries are gubbed and, in its death throes, puts out *severe* spikes killing everything it feeds..dead. The UPS failure I mentioned in my previous message was caused by an overload on one of the phases, we soon discovered that some offices in the building considered 'critical business equipment' to include the photo-copier, coffee machine, fridge and a microwave. They had managed to string 4-gang extension right they way round the office to ensure almost every piece of electrical equipment was connected to the wonderful UPS, therefore overloading one of the phases on the carefully specified UPS. There really is a case for all office equipment to be 'hard wired' into their respective power source, or at least any equipment/circuit that is UPS 'protected'... |
#38
On Sun, 1 Jun 2008 12:27:06 +0100, "Stephen"
wrote: For a while there was a newer looking fault caption on ITV2+1 on Freeview. It said "There is a Fault. Normal service will be resumed as soon as possible." in yellow letters on a light blue background (electronically generated), accompanied by a woman's voice reading out the same words. The voice sounded a bit over compressed or narrow bandwidth. It was different from ITV2+1 on satellite and unlike any of the other the fault captions. ITV2+1 on Freeview is an oddity, being on a National Grid Wireless multiplex instead of an ITV one, so could this have been a Red Bee fault caption from BBC White City perhaps? No, that's the generic fault caption (and audio) generated by National Grid Wireless when an incoming feed goes down. -- |
#39
On Sun, 1 Jun 2008 16:48:32 +0100, "Stephen"
wrote: I thought that Freeview multiplexes 1, B, C, D were put together by BBC/RedBee coding & multiplexing. ITV2+1 is on mux C or D (don't remember which), so if it doesn't go there I wonder where it does go? No, Mux C and D are operated by National Grid Wireless. Nothing to do with the BBC or Red Bee in any way. -- |
#40
"Zero Tolerance" wrote in message ... On Sun, 1 Jun 2008 12:27:06 +0100, "Stephen" wrote: For a while there was a newer looking fault caption on ITV2+1 on Freeview. It said "There is a Fault. Normal service will be resumed as soon as possible." in yellow letters on a light blue background (electronically generated), accompanied by a woman's voice reading out the same words. I note the lack of words "We apologise ...." on this new, electronically generated version. |