• Posted on October 30, 2017 10:29 am
    Joseph Forbes
    No comments

    Most broadband Internet connections stay "always on," keeping you online at all times. For the sake of convenience, home network owners often leave their routers, broadband modems, and other gear powered up and operating constantly, even when the equipment goes unused for long periods. But is it really a good idea to keep home network equipment always connected? Consider the pros and cons.

    Advantages of Powering Down Home Networks

    Security: Powering off your gear when you're not using it improves your network security. When network devices are offline, hackers and Wi-Fi wardrivers cannot target them. Other security measures, such as firewalls, help and are necessary, but they are not bulletproof.

    Savings on utility bills: Powering down computers, routers, and modems saves money. In some countries the savings are small, but in other parts of the world utility costs are significant.

    Surge protection: Unplugging network devices prevents them from being damaged by electrical power surges. Surge protectors can also prevent this kind of damage; however, surge units (particularly the inexpensive ones) cannot always protect against major power spikes such as those from lightning strikes.

    Noise reduction: Networking gear is much quieter than it was years ago, before loud built-in fans were replaced with solid-state cooling. Your senses might be adjusted to the relatively low level of home network noise, but you might also be pleasantly surprised at the added tranquility of a residence without it.

    Disadvantages of Powering Down Home Networks

    Hardware reliability: Frequently power cycling a computer or other networked device can shorten its working life due to the extra stress involved; disk drives are particularly susceptible to damage. On the other hand, high temperature also greatly reduces the lifetime of network equipment, so leaving equipment always on may well cause more damage from heat than powering it down occasionally would.
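The utility-bill savings mentioned above are easy to estimate. This is a minimal sketch; the wattages and the $0.15/kWh rate are illustrative assumptions, so substitute figures from your own equipment labels and electricity bill.

```python
# Rough estimate of the yearly electricity cost of leaving network gear on.
# The wattages and the $0.15/kWh rate are illustrative assumptions.

def annual_cost_usd(watts, rate_per_kwh, hours_per_day=24.0):
    """Yearly electricity cost of a device drawing `watts` for
    `hours_per_day` hours each day."""
    kwh_per_year = watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * rate_per_kwh

# A router (~10 W) plus a modem (~8 W):
always_on = annual_cost_usd(10 + 8, 0.15)           # on 24/7
nightly_off = annual_cost_usd(10 + 8, 0.15, 16.0)   # off 8 hours a night
savings = always_on - nightly_off
```

The difference for low-power gear is modest in places with cheap electricity, which matches the point above that the savings depend heavily on local utility costs.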
    Communication reliability: After power cycling, network connections sometimes fail to reestablish. Take care to follow the proper start-up sequence; for example, broadband modems generally should be powered on first, with other devices following only after the modem is ready.

    Convenience: Network devices such as routers and modems may be installed on ceilings, in basements, or in other hard-to-reach places. These devices should be shut down gracefully, using the manufacturer-recommended procedure, rather than by simply pulling the plug. Powering down a network properly takes time and may seem an inconvenience at first.

    The Bottom Line

    Home network gear need not be powered on and connected to the Internet at all times. All things considered, turning off your network during extended periods of non-use is a good idea. The security benefit alone makes it a worthwhile endeavor. Because computer networks can be difficult to set up initially, some people naturally fear disrupting a working configuration. In the long run, though, this practice will increase your confidence and peace of mind as a home network administrator.
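The modem-first start-up sequence can be sketched as a simple polling loop. This is an assumption-laden sketch: the `probe` callable stands in for a real reachability check (such as a ping to the modem's address), which is not specified in the post.

```python
import time

def wait_until_ready(probe, attempts=30, delay=2.0):
    """Poll `probe` (a callable returning True once the device responds)
    until it succeeds, or give up after `attempts` tries."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Usage sketch: after powering the modem on, wait for it to respond
# before switching on the router and other downstream devices.
# A real `probe` would wrap a reachability check such as a ping.
```

The same loop works at each stage of bring-up: wait for the modem, then for the router, then start the clients.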

    Blog Entry, Hacking, Hardware
  • Posted on October 28, 2017 10:15 am
    Joseph Forbes
    No comments

    Leave your computer on all the time, or shut it off when it's not in use: does it really make a difference? If you've been asking yourself this question, you'll be happy to hear that you can choose whichever way you want. You just need to understand the ramifications of your choice and take a few precautions to ensure you get the longest life you can from your computer. The most important precaution, no matter which method you choose, is to add a UPS (uninterruptible power supply). A UPS can protect your computer from many of the dangers it's likely to face.

    The Things That Can Harm Your Computer

    All of the parts that make up your computer have a limited lifetime. The processor, RAM, and graphics card all age, driven by, among other things, heat. Additional failure modes come from the stress of cycling a computer on and off. And it's not just your computer's semiconductors that are affected: mechanical components, such as those in hard drives, optical drives, printers, and scanners, are all affected by the power cycling they may undergo when your computer is turned off or on. In many cases, peripherals such as printers and external drives have circuitry that senses when your computer powers on or off and initiates the same condition, turning the device on or off as needed.

    There are other failure modes that originate externally to your computer. The one most often mentioned is the power surge or power drop, a sudden rise or fall in voltage on the electrical circuit your computer is plugged into. We often associate these surges with transient events, such as nearby lightning strikes or devices that draw a lot of power at once (a vacuum cleaner, a hair dryer, etc.). All of these failure types need to be considered.
    Leaving a computer turned on can reduce exposure to some of these failure types, while turning your computer off can prevent most of the external events that can cause components to fail. The question then becomes: which is best, on or off? It turns out, at least in my opinion, it's a bit of both. If your goal is to maximize lifetime, there's a period when turning a new computer on and off makes sense; later, leaving it on 24/7 makes sense.

    Computer Life Testing and Failure Rates

    There are various failure modes that can result in your computer, well, failing. Computer manufacturers have a few tricks up their sleeves to reduce the failure rate seen by end users. What makes this interesting is that the assumptions a manufacturer makes about warranty periods can be upset by the decision to leave a computer on 24/7; let's find out why.

    Computer and component manufacturers use various tests to ensure the quality of their products. One of these is known as life testing, which uses a burn-in process to accelerate the aging of a device under test by cycling power, running devices at elevated voltage and temperature, and exposing the devices to conditions beyond the environment they were intended to operate in. Manufacturers found that devices that survived their infancy would continue to operate without problems until their expected lifetime was reached. Devices in their middle years rarely failed, even when exposed to conditions just outside their expected operating range.

    The graph of failure rate over time became known as the bathtub curve, because it looks like a bathtub viewed from the side. Components fresh off the manufacturing line display a high failure rate when first turned on. That failure rate drops quickly, so that in a short time a steady but extremely low failure rate prevails over the remaining expected years.
    Near the end of the component's life, the failure rate starts to rise again, until it quickly reaches a very high failure rate like the one seen near the beginning of the component's life. Life testing showed that components were highly reliable once they were beyond the infancy period. Manufacturers would then offer components after a burn-in process that aged the devices beyond the infancy period, and customers who needed high reliability would pay extra for these burned-in devices. Typical customers for this service included the military, NASA contractors, and the aviation and medical industries. Devices that did not go through a full burn-in process were sold mostly for consumer use, but the manufacturers included a warranty whose time frame usually matched or exceeded the infancy period on the bathtub curve.

    Turning your computer off every night, or when it's not in use, would seem like a possible cause of component failure, and it's true that as your computer ages, it's more likely to fail when turned off or on. But it's a bit counterintuitive to learn that putting stress on your system while it's young, and under warranty, may be a good thing. Remember the bathtub curve, which says that failure is most likely when components are very young and that failure rates drop as they age? If you remove some of the expected types of stress by never power cycling your computer, you slow down the aging process; in essence, you extend the length of time the device remains susceptible to early failure. While your computer is under warranty, it may therefore be advantageous to provide a modicum of stress by turning it off when not in use, so that any failure caused by turn-on/turn-off stress happens under warranty.
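The bathtub curve described above is commonly modeled with the Weibull hazard function, where a shape parameter below 1 gives the falling infant-mortality rate and a shape above 1 gives the rising wear-out rate. The parameter values below are illustrative, not drawn from any real component data.

```python
def weibull_hazard(t, shape, scale=1.0):
    """Instantaneous failure rate h(t) of a Weibull distribution:
    shape < 1 models infant mortality, shape == 1 the flat bottom of
    the bathtub, and shape > 1 the wear-out phase."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# Illustrative only: the shapes and sample times are made up.
infant = [weibull_hazard(t, shape=0.5) for t in (0.1, 0.5, 1.0)]
wear_out = [weibull_hazard(t, shape=3.0) for t in (0.1, 0.5, 1.0)]
# infant mortality: the rate falls with age; wear-out: the rate rises
```

Summing an infant-mortality term, a constant term, and a wear-out term produces the full bathtub shape.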
    Leaving your computer turned on 24/7 removes a few of the known stress events that lead to component failure, including the in-rush of current that can damage some devices and the voltage swings and surges that occur when turning a computer off. This is especially true as your computer ages and comes closer to the end of its expected life. By not cycling the power, you can protect older computers from failure, at least for a while. For younger computers, however, it may be more of a "don't care" issue: components in their teenage through adult years remain very stable and show little likelihood of failure from conventional power cycling (turning the computer off at night). For new computers, there's the question of whether removing stress slows down aging, thus extending the window for early failure beyond the normal warranty period.

    Using Both Options: Turn the Computer Off When New, and Leave It On With Age

    Do what you can to mitigate environmental stress factors, such as operating temperature. This can be as simple as running a fan in the hot months to ensure air movement around your computer. Use a normal turn-on and turn-off cycle (that is, turn the computer off when not in use) during the original manufacturer's warranty period. This helps ensure that all components age out, under warranty, to the time frame when failure rates fall to a low level, and that any failure that does happen occurs under warranty, saving you some serious coin. Once you move beyond the warranty period, the components should have aged past the infant-mortality time frame and entered their teenage years, when they're tough and can stand up to just about any reasonable amount of stress thrown at them. At this point, you can switch to a 24/7 operating mode if you wish. So: new computer, turn it on and off as needed. Teenage to adult, it's up to you; there's no real benefit either way.
    Senior? Keep it on 24/7 to extend its life.

    When Running 24/7, Which Is Better: Sleep or Hibernation?

    One possible problem with running your computer 24/7, even when it isn't actively being used, is that it may enter a hibernation mode that is extremely similar to turning the computer off and back on again. Depending on your computer and the OS it's running, it may support multiple types of power-saving options.

    Generally speaking, sleep mode is designed to reduce power consumption while keeping the computer in a semi-operational state. In this mode, your computer spins down any hard drives and optical drives it may have, RAM drops to a lower-activity state, displays are dimmed if not outright powered off, and processors run at a reduced clock rate or in a special low-power state. In sleep mode, the computer can usually continue to run some basic tasks, though not as speedily as in its normal state. Most open user apps remain loaded but sit in a standby state. There are exceptions, depending on your OS, but you get the idea: sleep mode conserves power while keeping the computer turned on.

    Hibernation, another way of reducing power consumption, varies a bit among the Mac, Windows, and Linux OSes. In hibernation mode, running apps are put into a standby state, and then the contents of RAM are copied to your computer's storage device. At that point, RAM and the storage devices are powered off, and most peripherals, including the display, are put into standby. Once all data has been secured, the computer is essentially turned off. Restarting from hibernation isn't much different, at least as experienced by the components that make up your computer, from turning your computer on. As you can see, if you haven't ensured that your computer won't enter hibernation after some amount of time, you're not really keeping it on 24/7.
    As a result, you may not achieve the effect you wanted by not turning your computer off. If your intent is to run your computer 24/7 to perform various processing tasks, you'll want to disable all sleep modes except display sleep; you probably don't need the display active for any of those tasks. The method for using only display sleep differs among operating systems.

    Some OSes have another sleep mode that allows specified tasks to run while placing all remaining tasks in standby. In this mode, power is conserved, but the processes that need to run are allowed to continue. In macOS, this is known as App Nap; Windows has an equivalent known as Connected Standby, or Modern Standby in Windows 10. No matter what it's called, or which OS it runs on, the purpose is to conserve power while allowing some apps to run. For running your computer 24/7, this type of sleep mode doesn't exhibit the power cycling seen in hibernation, so it can meet the needs of those who don't wish to turn their computers off.

    Leave the Computer On or Turn It Off: Final Thoughts

    If you're asking whether it's safe to turn your computer on and off as needed, the answer is yes. It's not something I would worry about until the computer reaches old age. If you're asking whether it's safe to leave a computer on 24/7, I would say the answer is also yes, with a couple of caveats. You need to protect the computer from external stress events such as voltage surges, lightning strikes, and power outages; you get the idea. Of course, you should do this even if you plan to turn the computer on and off, but the risk is slightly greater for computers left on 24/7, simply because they're more likely to be on when a severe event occurs, such as a summer thunderstorm rolling through your area.

    Blog Entry, Hardware, KnowledgeBase (KB)
  • Posted on October 2, 2017 10:45 am
    Joseph Forbes
    No comments

    The latency of a network connection represents the amount of time required for data to travel between the sender and receiver. While every computer network possesses some inherent latency, the amount varies and can suddenly increase for various reasons. People perceive these unexpected time delays as lag.

    The Speed of Light on a Computer Network

    No network traffic can travel faster than the speed of light. On a home or local area network, the distance between devices is so small that light speed doesn't matter, but for Internet connections it becomes a factor. Under perfect conditions, light requires roughly 5 ms to travel 1,000 miles (about 1,600 kilometers). Furthermore, most long-distance Internet traffic travels over cables, which cannot carry signals as fast as light moves in a vacuum: the refractive index of the cable material slows the signal. Data over a fiber optic cable, for example, requires at least 7.5 ms to travel 1,000 miles.

    Typical Internet Connection Latencies

    Besides the limits of physics, additional network latency is introduced when traffic is routed through Internet servers and other backbone devices. The typical latency of an Internet connection also varies by connection type. The study Measuring Broadband America - February 2013 reported these typical latencies for common forms of U.S. broadband service:

        fiber optic: 18 ms
        cable Internet: 26 ms
        DSL: 44 ms
        satellite Internet: 638 ms

    Causes of Lag on Internet Connections

    The latency of an Internet connection fluctuates by small amounts from one minute to the next, but the additional lag from even small increases becomes noticeable when surfing the Web or running online applications. The following are common sources of Internet lag:

    Internet traffic load: Spikes in Internet utilization during peak usage times of day often cause lag. The nature of this lag varies by service provider and by a person's geographic location.
    Unfortunately, other than moving or changing Internet service, an individual user cannot avoid this kind of lag.

    Online application load: Multiplayer online games, Web sites, and other client-server network applications rely on shared Internet servers. If these servers become overloaded with activity, the clients experience lag.

    Weather and other wireless interference: Satellite, fixed wireless broadband, and other wireless Internet connections are particularly susceptible to signal interference from rain. Wireless interference corrupts network data in transit, causing lag from re-transmission delays.

    Lag switches: Some people who play online games install a device called a lag switch on their local network. A lag switch is specially designed to intercept network signals and introduce significant delays into the flow of data back to other gamers connected to a live session. You can do little about this kind of lag other than avoiding players who use lag switches; fortunately, they are relatively uncommon.

    Causes of Lag on Home Networks

    Sources of network lag also exist inside a home network:

    Overloaded router or modem: Any network router will eventually bog down if too many active clients use it at the same time. Network contention among multiple clients means they sometimes wait for each other's requests to be processed, causing lag. You can replace the router with a more powerful model, or add another router to the network, to help alleviate this problem. Similar contention occurs on a residence's modem and its connection to the Internet provider if they become saturated with traffic; depending on the speed of your Internet link, avoid running too many simultaneous downloads and online sessions to minimize this lag.

    Overloaded client device: PCs and other client devices also become a source of network lag if they cannot process network data quickly enough.
    While modern computers are sufficiently powerful in most situations, they can slow down significantly if too many applications run simultaneously. Even applications that generate no network traffic can introduce lag; for example, a misbehaving program can consume 100 percent of the available CPU on a device, delaying the processing of network traffic for other applications.

    Malware: A network worm hijacks a computer and its network interface, which can cause it to perform sluggishly, similar to being overloaded. Running antivirus software on network devices helps detect these worms.

    Use of wireless: Enthusiast online gamers, as an example, often prefer to run their devices over wired Ethernet instead of Wi-Fi because home Ethernet supports lower latencies. While the savings is typically only a few milliseconds in practice, wired connections also avoid the risk of wireless interference, which causes significant lag when it occurs.

    How Much Lag Is Too Much?

    The impact of lag depends on what a person is doing on the network and, to some degree, the level of network performance they have grown accustomed to. Users of satellite Internet expect very long latencies and tend not to notice a temporary lag of an additional 50 or 100 ms. Dedicated online gamers, on the other hand, strongly prefer their network connection to run with less than 50 ms of latency and will quickly notice any lag above that level. In general, online applications perform best when network latency stays below 100 ms; any additional lag will be noticeable to users.
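The speed-of-light figures quoted earlier can be reproduced with a short calculation. This is a sketch of the physics lower bound only; the refractive index of 1.47 is a typical value assumed for optical fiber, and real routes are longer than straight-line distance.

```python
SPEED_OF_LIGHT_KM_PER_S = 299_792.458
KM_PER_MILE = 1.609344

def min_one_way_latency_ms(distance_miles, refractive_index=1.0):
    """Lower bound on one-way propagation delay over a straight path.
    A refractive index of 1.0 means free space; ~1.47 is a typical
    (assumed) value for optical fiber."""
    distance_km = distance_miles * KM_PER_MILE
    signal_speed = SPEED_OF_LIGHT_KM_PER_S / refractive_index
    return distance_km / signal_speed * 1000.0

vacuum_ms = min_one_way_latency_ms(1000)                        # about 5.4 ms
fiber_ms = min_one_way_latency_ms(1000, refractive_index=1.47)  # about 7.9 ms
```

Note that these are one-way floors; the measured broadband latencies above are round-trip times that also include routing and queuing delays.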

    Blog Entry, DATA, Hardware
  • Posted on September 20, 2017 9:35 am
    Joseph Forbes
    No comments

    Whether you're a home PC user or a network administrator, you always need a plan for when the unexpected happens to your computers and/or network. A Disaster Recovery Plan (DRP) is essential in helping to ensure that you don't get fired after a server gets fried in a fire, or, in the case of the home user, that you don't get kicked out of the house when mamma discovers you've just lost years' worth of irreplaceable digital baby photos. A DRP doesn't have to be overly complicated. You just need to cover the basic things it will take to get back up and running again if something bad happens. Here are some items that should be in every good disaster recovery plan:

    1. Backups, Backups, Backups!

    Most of us think about backups right after we've lost everything in a fire, flood, or burglary. We think to ourselves, "I sure hope I have a backup of my files somewhere." Unfortunately, wishing and hoping won't bring back dead files or keep your wife from flogging you about the head and neck after you've lost gigabytes of family photos. You need a plan for regularly backing up your critical files so that when a disaster occurs you can recover what was lost. There are dozens of online backup services available that will back up your files to an off-site location via a secure connection. If you don't trust "The Cloud," you can keep things in-house by purchasing an external backup storage device such as a Drobo. Whichever method you choose, set a schedule to back up all your files at least once weekly, with incremental backups each night if possible. Additionally, you should periodically make a copy of your backup and store it off-site in a fire safe, a safe deposit box, or somewhere other than where your computers reside. Off-site backups are important because your backup is useless if it burns up in the same fire that just torched your computer.

    2. Document Critical Information

    If you encounter a major disaster, you're going to lose a lot of information that may not be inside a file. This information will be critical to getting back to normal and includes items such as:

        Make, model, and warranty information for all your computers and other peripherals
        Account names and passwords (for e-mail, ISP, wireless routers, wireless networks, admin accounts, system BIOS)
        Network settings (IP addresses of all PCs, firewall rules, domain info, server names)
        Software license information (list of installed software, license keys for re-installation, version info)
        Support phone numbers (for ISP, PC manufacturer, network administrators, tech support)

    3. Plan for Extended Downtime

    If you're a network administrator, you'll need a plan that covers what you will do if the downtime from the disaster is expected to last more than a few days. You'll need to identify possible alternate sites to house your servers if your facilities are going to be unusable for an extended period. Check with your management before looking into alternatives to get their buy-in. Ask them questions such as: How much downtime is tolerable based on their business needs? What is the restoration priority (which systems do they want back online first)? What is their budget for disaster recovery operations and preparation?

    4. Plan for Getting Back to Normal

    You'll need a transition plan for moving your files off of the loaner you borrowed and onto the new PC you bought with your insurance check, or for moving from your alternate site back to your original server room after it's been restored to normal.

    Test and update your DRP regularly. Keep your DRP up to date with all the latest information (updated points of contact, software version information, etc.). Check your backup media to make sure it is actually backing something up and not just sitting idle. Check the logs to make sure the backups are running on the schedule you set up.

    Again, your disaster recovery plan shouldn't be overly complicated. You want it to be useful and always within arm's reach. Keep a copy of it off-site as well. Now, if I were you, I would go start backing up those baby pics ASAP!
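The weekly-full, nightly-incremental schedule recommended above can be expressed as a small scheduling rule. The choice of Sunday for the full backup is an arbitrary assumption for illustration.

```python
import datetime

def backup_type(day):
    """Return which backup to run on a given date: one full backup a
    week (Sunday here, an arbitrary choice) and incrementals otherwise."""
    return "full" if day.weekday() == 6 else "incremental"

# One week of the schedule, starting on a Sunday:
start = datetime.date(2017, 9, 17)
week = [start + datetime.timedelta(days=i) for i in range(7)]
plan = {d.isoformat(): backup_type(d) for d in week}
```

A rule like this can drive whatever backup tool you use, and the resulting plan doubles as the schedule to verify against in your logs.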

    Blog Entry, DATA, Data Recovery
  • Posted on September 17, 2017 9:30 am
    Joseph Forbes
    No comments

    Whether you're managing disaster preparation for a small business or a large corporation, you need to plan for natural disasters because, as we all know, information technology and water don't mix well. Let's go over some basic steps you'll need to take to ensure that your network and IT investments survive a disaster such as a flood or hurricane.

    1. Develop a Disaster Recovery Plan

    The key to successfully recovering from a natural disaster is to have a good disaster recovery plan in place before something bad happens. This plan should be tested periodically to ensure that everyone involved knows what they are supposed to do during a disaster. The National Institute of Standards and Technology (NIST) has excellent resources on how to develop disaster recovery plans. Check out NIST Special Publication 800-34 on contingency planning to find out how to get started developing a rock-solid disaster recovery plan.

    2. Get Your Priorities Straight: Safety First

    Obviously, protecting your people is the most important thing. Never put your network and servers ahead of keeping your staff safe, and never operate in an unsafe environment. Always ensure that facilities and equipment have been deemed safe by the proper authorities before any recovery or salvage operations begin. Once safety issues have been addressed, you should have a system restoration priority so you can focus on what it will take to stand up your critical infrastructure and servers at an alternate location. Have management identify which business functions they want back online first, and then focus planning on restoring what is needed to ensure safe recovery of mission-critical systems.

    3. Label and Document Your Network and Equipment

    Pretend that you just found out a major storm is two days away and is going to flood your building. Most of your infrastructure is in the basement, which means you are going to have to relocate the equipment elsewhere.
    The tear-down process will likely be rushed, so you need to have your network well documented so that you can resume operations at an alternate location. Accurate network diagrams are essential for guiding network technicians as they reconstruct your network at the alternate site. Label things as much as you can, with straightforward naming conventions that everyone on your team understands. Keep a copy of all network diagram information at an offsite location.

    4. Prepare to Move Your IT Investments to Higher Ground

    Since our friend gravity likes to keep water at the lowest point possible, you'll want to plan to relocate your infrastructure equipment to higher ground in the event of a major flood. Make arrangements with your building manager for a safe storage location on a non-flood-prone floor where you can temporarily move network equipment that might otherwise be flooded. If the entire building is likely to be trashed or flooded, find an alternate site that is not in a flood zone. You can visit the FloodSmart.gov website and enter the address of your potential alternate site to see whether it is located in a flood zone. If it is in a high-risk flood area, you may want to consider relocating your alternate site. Make sure your disaster recovery plan covers the logistics of who's going to move what, how they are going to do it, and when they are going to move operations to the alternate site. Move the expensive stuff first (switches, routers, firewalls, servers) and the least expensive stuff last (PCs and printers). If you're designing a server room or data center, consider locating it in an area of your building that won't be prone to flooding, such as a non-ground-level floor; this will save you the headache of relocating equipment during a flood.

    5. Make Sure You Have Good Backups Before a Disaster Strikes
    If you don't have good backups to restore from, it won't matter whether you have an alternate site, because you won't be able to restore anything of value. Check that your scheduled backups are working, and check your backup media to make sure it is actually capturing data. Be vigilant: make sure your administrators are reviewing backup logs and that backups are not silently failing.
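The log-review chore above can be partly automated with a simple filter that surfaces failed jobs. The log format here (one line per job containing "SUCCESS" or "FAILED") is a made-up example; real backup software writes its own formats.

```python
def failed_jobs(log_lines):
    """Return the log lines that record a failed backup job, so silent
    failures surface in a report instead of going unnoticed."""
    return [line for line in log_lines if "FAILED" in line.upper()]

# A made-up log format, purely for illustration:
sample_log = [
    "2017-09-17 02:00 job=nightly-incremental status=SUCCESS",
    "2017-09-18 02:00 job=nightly-incremental status=failed (disk full)",
    "2017-09-24 03:00 job=weekly-full status=SUCCESS",
]
problems = failed_jobs(sample_log)   # the one failed nightly job
```

Running a check like this on a schedule, and alerting when `problems` is non-empty or when expected log lines are missing entirely, is what keeps backups from failing silently.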

    DATA, Hardware, Security