• Posted on October 2, 2017 10:45 am
    Joseph Forbes
    No comments

    The latency of a network connection represents the amount of time required for data to travel between the sender and receiver. While all computer networks possess some inherent amount of latency, the amount varies and can suddenly increase for various reasons. People perceive these unexpected time delays as lag.

    The Speed of Light on a Computer Network
    No network traffic can travel faster than the speed of light. On a home or local area network, the distance between devices is so small that light speed does not matter, but for Internet connections it becomes a factor. Under perfect conditions, light requires roughly 5 ms to travel 1,000 miles (about 1,600 kilometers). Furthermore, most long-distance Internet traffic travels over cables, which cannot carry signals as fast as light travels in a vacuum because of the refractive index of the cable material. Data over a fiber optic cable, for example, requires at least 7.5 ms to travel 1,000 miles.

    Typical Internet Connection Latencies
    Besides the limits of physics, additional network latency is added when traffic is routed through Internet servers and other backbone devices. The typical latency of an Internet connection also varies depending on its type. The study Measuring Broadband America - February 2013 reported these typical latencies for common forms of U.S. broadband service:
    - fiber optic: 18 ms
    - cable Internet: 26 ms
    - DSL: 44 ms
    - satellite Internet: 638 ms

    Causes of Lag on Internet Connections
    The latency of an Internet connection fluctuates by small amounts from one minute to the next, but the additional lag from even small increases becomes noticeable when surfing the Web or running online applications. The following are common sources of Internet lag:
    - Internet traffic load: Spikes in Internet utilization during peak usage times of day often cause lag. The nature of this lag varies by service provider and a person's geographic location. Unfortunately, other than moving locations or changing Internet service, an individual user cannot avoid this kind of lag.
    - Online application load: Multiplayer online games, Web sites, and other client-server network applications use shared Internet servers. If these servers become overloaded with activity, the clients experience lag.
    - Weather and other wireless interference: Satellite, fixed wireless broadband, and other wireless Internet connections are particularly susceptible to signal interference from rain. Wireless interference corrupts network data in transit, causing lag from re-transmission delays.
    - Lag switches: Some people who play online games install a device called a lag switch on their local network. A lag switch is specially designed to intercept network signals and introduce significant delays into the flow of data back to other gamers connected to a live session. You can do little to solve this kind of lag problem other than avoiding playing with those who use lag switches; fortunately, they are relatively uncommon.

    Causes of Lag on Home Networks
    Sources of network lag also exist inside a home network, as follows:
    - Overloaded router or modem: Any network router will eventually bog down if too many active clients use it at the same time. Network contention among multiple clients means that they are sometimes waiting for each other's requests to be processed, which causes lag. Replacing the router with a more powerful model, or adding a second router to the network, can help alleviate this problem. Similarly, network contention occurs on a residence's modem and its connection to the Internet provider if they are saturated with traffic. Depending on the speed of your Internet link, avoid running too many simultaneous downloads and online sessions to minimize this lag.
    - Overloaded client device: PCs and other client devices become a source of network lag if they cannot process network data quickly enough. While modern computers are sufficiently powerful in most situations, they can slow down significantly if too many applications run simultaneously. Even applications that generate no network traffic can introduce lag; for example, a misbehaving program can consume 100 percent of a device's CPU, delaying the computer from processing network traffic for other applications.
    - Malware: A network worm hijacks a computer and its network interface, which can cause it to perform sluggishly, similar to being overloaded. Running antivirus software on network devices helps detect these worms.
    - Use of wireless: Enthusiast online gamers often prefer to run their devices over wired Ethernet instead of Wi-Fi because home Ethernet supports lower latencies. While the savings are typically only a few milliseconds in practice, wired connections also avoid the risk of wireless interference, which can cause significant lag when it occurs.

    How Much Lag Is Too Much?
    The impact of lag depends on what a person is doing on the network and, to some degree, the level of network performance they have grown accustomed to. Users of satellite Internet expect very long latencies and tend not to notice a temporary lag of an additional 50 or 100 ms. Dedicated online gamers, on the other hand, strongly prefer their network connection to run with less than 50 ms of latency and will quickly notice any lag above that level. In general, online applications perform best when network latency stays below 100 ms; any additional lag will be noticeable to users.
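    To put the speed-of-light limit above into rough numbers, here is a small back-of-the-envelope sketch in Python. It assumes signals in fiber travel at roughly two-thirds of the vacuum speed of light; that velocity factor, the sample distances, and the function name are illustrative assumptions rather than figures from the study cited above.

        # Minimal sketch: estimate the physics-imposed floor on one-way latency
        # over fiber. The 0.67 velocity factor for light in glass is an assumed
        # round number; real routes add routing and queuing delay on top.

        SPEED_OF_LIGHT_KM_PER_MS = 299.792    # vacuum speed of light, km per millisecond
        FIBER_VELOCITY_FACTOR = 0.67          # assumed fraction of c for signals in fiber


        def fiber_propagation_delay_ms(distance_km: float) -> float:
            """Return the minimum one-way propagation delay in milliseconds."""
            return distance_km / (SPEED_OF_LIGHT_KM_PER_MS * FIBER_VELOCITY_FACTOR)


        if __name__ == "__main__":
            for miles in (1_000, 3_000, 6_000):
                km = miles * 1.609
                print(f"{miles:>5} miles: at least {fiber_propagation_delay_ms(km):.1f} ms one way")

    For the 1,000-mile example this works out to roughly 8 ms one way, in the same ballpark as the 7.5 ms figure quoted above; everything beyond that floor in the Measuring Broadband America numbers comes from routing, queuing, and the access technology itself.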

    Blog Entry, DATA, Hardware
  • Posted on September 20, 2017 9:35 am
    Joseph Forbes
    No comments

    Whether you're a home PC user or a network administrator, you always need a plan for when the unexpected happens to your computers and/or network. A Disaster Recovery Plan (DRP) is essential in helping to ensure that you don't get fired after a server gets fried in a fire, or, in the case of the home user, that you don't get kicked out of the house when mamma discovers you've just lost years' worth of irreplaceable digital baby photos. A DRP doesn't have to be overly complicated. You just need to cover the basic things that it will take to get back up and running again if something bad happens. Here are some items that should be in every good disaster recovery plan:

    1. Backups, Backups, Backups!
    Most of us think about backups right after we've lost everything in a fire, flood, or burglary. We think to ourselves, "I sure hope I have a backup of my files somewhere". Unfortunately, wishing and hoping won't bring back dead files or keep your wife from flogging you about the head and neck after you've lost gigabytes of family photos. You need a plan for regularly backing up your critical files so that when a disaster occurs you can recover what was lost. There are dozens of online backup services available that will back up your files to an off-site location via a secure connection. If you don't trust "The Cloud", you can elect to keep things in-house by purchasing an external backup storage device such as a Drobo. Whichever method you choose, make sure you set a schedule to back up all your files at least once weekly, with incremental backups each night if possible (a minimal scripted example follows this post). Additionally, you should periodically make a copy of your backup and store it off-site in a fire safe, safe deposit box, or somewhere other than where your computers reside. Off-site backups are important because your backup is useless if it's burned up in the same fire that just torched your computer.

    2. Document Critical Information
    If you encounter a major disaster, you're going to lose a lot of information that may not be inside of a file. This information will be critical to getting back to normal and includes items such as:
    - Make, model, and warranty information for all your computers and other peripherals
    - Account names and passwords (for e-mail, ISP, wireless routers, wireless networks, admin accounts, system BIOS)
    - Network settings (IP addresses of all PCs, firewall rules, domain info, server names)
    - Software license information (list of installed software, license keys for re-installation, version info)
    - Support phone numbers (for ISP, PC manufacturer, network administrators, tech support)

    3. Plan for Extended Downtime
    If you're a network administrator, you'll need a plan that covers what you will do if the downtime from the disaster is expected to last more than a few days. You'll need to identify possible alternate sites to house your servers if your facilities are going to be unusable for an extended period of time. Check with your management prior to looking into alternatives to get their buy-in. Ask them questions such as:
    - How much downtime is tolerable based on their business needs?
    - What is the restoration priority (which systems do they want back online first)?
    - What is their budget for disaster recovery operations and preparation?

    4. Plan for Getting Back to Normal
    You'll need a transition plan for moving your files off of the loaner you borrowed and onto the new PC you bought with your insurance check, or for moving from your alternate site back to your original server room after it's been restored to normal.

    Test and update your DRP regularly. Make sure you keep your DRP up to date with all the latest information (updated points of contact, software version information, etc.). Check your backup media to make sure it is actually backing something up and not just sitting idle. Check the logs to make sure the backups are running on the schedule you set up. Again, your disaster recovery plan shouldn't be overly complicated. You want to make it useful and something that is always within arm's reach. Keep a copy of it off-site as well. Now, if I were you, I would go start backing up those baby pics ASAP!
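    As a rough illustration of the nightly incremental backup mentioned in item 1, the Python sketch below copies only files that are new or have changed since the last run into a backup tree. The source and destination paths and the function name are illustrative assumptions; any scheduled backup tool accomplishes the same thing.

        # Minimal sketch of a nightly incremental backup: copy only files whose
        # modification time is newer than the copy already in the backup tree.
        # The paths below are illustrative assumptions; point them at your own folders.
        import shutil
        from pathlib import Path

        SOURCE = Path.home() / "Documents"            # what to protect
        DESTINATION = Path("/mnt/backup/Documents")   # external drive or NAS share


        def incremental_backup(source: Path, destination: Path) -> int:
            """Copy new or modified files from source into destination; return the count."""
            copied = 0
            for src_file in source.rglob("*"):
                if not src_file.is_file():
                    continue
                dst_file = destination / src_file.relative_to(source)
                if not dst_file.exists() or src_file.stat().st_mtime > dst_file.stat().st_mtime:
                    dst_file.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(src_file, dst_file)  # copy2 preserves timestamps
                    copied += 1
            return copied


        if __name__ == "__main__":
            print(f"Backed up {incremental_backup(SOURCE, DESTINATION)} files")

    Schedule something like this nightly with cron or Task Scheduler, and pair it with the periodic off-site copy described above so one fire can't take out both the originals and the backup.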

    Blog Entry, DATA, Data Recovery
  • Posted on September 17, 2017 9:30 am
    Joseph Forbes
    No comments

    Whether you're managing disaster preparation activities for a small business or a large corporation, you need to plan for natural disasters because, as we all know, information technology and water don't mix well. Let's go over some basic steps you'll need to take to ensure that your network and IT investments survive a disaster such as a flood or hurricane.

    1. Develop a Disaster Recovery Plan
    The key to successfully recovering from a natural disaster is to have a good disaster recovery plan in place before something bad happens. This plan should be periodically tested to ensure that all parties involved know what they are supposed to do during a disaster event. The National Institute of Standards and Technology (NIST) has excellent resources on how to develop disaster recovery plans. Check out NIST Special Publication 800-34 on Contingency Planning to find out how to get started developing a rock-solid disaster recovery plan.

    2. Get Your Priorities Straight: Safety First
    Obviously, protecting your people is the most important thing. Never put your network and servers ahead of keeping your staff safe. Never operate in an unsafe environment. Always ensure that facilities and equipment have been deemed safe by the proper authorities before any recovery or salvage operations begin. Once safety issues have been addressed, you should have a system restoration priority so you can focus on what it will take to stand up your critical infrastructure and servers at an alternate location. Have management identify which business functions they want back online first, and then focus planning on restoring what is needed to ensure safe recovery of mission-critical systems.

    3. Label and Document Your Network and Equipment
    Pretend that you just found out that a major storm is two days away and it is going to flood your building. Most of your infrastructure is in the basement, which means you are going to have to relocate the equipment elsewhere. The tear-down process will likely be rushed, so you need to have your network well documented so that you can resume operations at an alternate location. Accurate network diagrams are essential for guiding network technicians as they reconstruct your network at the alternate site. Label things as much as you can, with straightforward naming conventions that everyone on your team understands. Keep a copy of all network diagram information at an offsite location.

    4. Prepare to Move Your IT Investments to Higher Ground
    Since our friend gravity likes to keep water at the lowest point possible, you'll want to plan to relocate your infrastructure equipment to higher ground in the event of a major flood. Make arrangements with your building manager to have a safe storage location on a non-flood-prone floor where you can temporarily move network equipment that might otherwise be flooded. If the entire building is likely to be trashed or flooded, find an alternate site that is not in a flood zone. You can visit the FloodSmart.gov website and enter the address of your potential alternate site to see whether it is located in a flood zone. If it is in a high-risk flood area, you may want to consider relocating your alternate site. Make sure your disaster recovery plan covers the logistics of who's going to move what, how they are going to do it, and when they are going to move operations to the alternate site. Move the expensive stuff first (switches, routers, firewalls, servers) and the least expensive stuff last (PCs and printers). If you're designing a server room or data center, consider locating it in an area of your building that won't be prone to flooding, such as a non-ground-level floor; this will save you the headache of relocating equipment during a flood.

    5. Make Sure You Have Good Backups Before a Disaster Strikes
    If you don't have good backups to restore from, it won't matter whether you have an alternate site, because you won't be able to restore anything of value. Check to make sure your scheduled backups are working, and check backup media to make sure it is actually capturing data. Be vigilant. Make sure that your administrators are reviewing backup logs and that backups are not silently failing (a small log-check sketch follows this post).
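    To make the "backups are not silently failing" check in step 5 concrete, here is a small Python sketch that warns when a backup destination has no recently modified files. The directory paths and the 26-hour threshold are illustrative assumptions, not part of the original post.

        # Minimal sketch: flag backup destinations whose newest file is older than
        # MAX_AGE_HOURS, which usually means the scheduled job is silently failing.
        # Paths and threshold are illustrative assumptions.
        import time
        from pathlib import Path
        from typing import Optional

        BACKUP_DIRS = [Path("/mnt/backup/documents"), Path("/mnt/backup/servers")]
        MAX_AGE_HOURS = 26  # one nightly run plus some slack


        def newest_file_age_hours(directory: Path) -> Optional[float]:
            """Return the age in hours of the most recently modified file, or None if empty."""
            mtimes = [p.stat().st_mtime for p in directory.rglob("*") if p.is_file()]
            if not mtimes:
                return None
            return (time.time() - max(mtimes)) / 3600


        for backup_dir in BACKUP_DIRS:
            age = newest_file_age_hours(backup_dir)
            if age is None:
                print(f"WARNING: {backup_dir} contains no files at all")
            elif age > MAX_AGE_HOURS:
                print(f"WARNING: {backup_dir} last changed {age:.0f} hours ago")
            else:
                print(f"OK: {backup_dir} updated {age:.0f} hours ago")

    Run something like this from cron and have it mail the output to your administrators so a broken backup job gets noticed long before a storm does.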

    DATA, Hardware, Security
  • Posted on September 9, 2017 9:29 am
    Joseph Forbes
    No comments

    So, at this point it is hard to say that current quantum computers (well, more like chips) have integrated within them RAM (random access memory), ROM (read-only memory), hard drives (another memory component), and buses or data lines. But there are developments in quantum computer architectures, most of them quantum-classical hybrids, where you certainly do have components that manipulate qubits or qudits (so hot right now) to carry out calculations on quantum circuits (like a processor), schemes to store this information onto other qubits (memory), and systems to carry out measurements after our manipulations to get the result of our quantum computation (with quantum information relayed through optical buses).

    What are quantum computers/chips (or just qubits) usually made out of currently? Well, it's like the Cambrian Explosion (ok, maybe not that diverse), but with different quantum computing architectures, or substrates if you will, each with their advantages and disadvantages. Trapped-ion qubits, superconducting circuit qubits, and diamond nitrogen-vacancy qubits are some examples of physical realizations of qubits. Generally, you can use something as a qubit as long as you can manipulate the state of that quantum entity (electrons, photons, and so on, usually via spin or some other field quantity) in a two-level fashion (so you get the 0 or 1 state, or superpositions of these) or a multi-level fashion (qudits can take on these multi-level states, so think 0, 1, 2, ... n states). You usually end up manipulating these quantum states using optics, so photons, or different electromagnetic frequencies such as microwaves or radio-frequency waves, and often in composite pulses (there are experimental benefits to delivering manipulations as pulse sequences, which offer robustness to noise and other errors compared with a single pulse).

    Another aspect of current quantum computing hardware you may be familiar with is the near-universal requirement for cryogenic temperatures (from a balmy 1-10 Kelvin down to milli-Kelvin or even micro-Kelvin temperatures), as well as pulling a hard vacuum for certain qubit modalities (notably trapped-ion methods). And the field really is marching rapidly forward, with centers in the US, Europe, and Australia (these three are heavily invested in bringing forth a working universal quantum computer) and in Asia (China focuses heavily on quantum informatic experiments, while Japan and Korea have their sights set more on quantum information manipulation with quantum optical systems).

    In fact, there are proposals to build a "football field"-sized quantum computer based on trapped-ion methods, owing to their good entanglement lifetimes, scalability, and straightforward manipulation and addressing of qubits (trapped ions are also a fairly legacy technology, used in things like atomic clocks and as components in mass spectrometry workflows). This is pretty exciting and would actually be a great thing to invest in and turn into a big public science project, much like how CERN's LHC or the LIGOs are operated and used. You can obviously see where this would be a good first step but might need improvement to break the mainstream commercial barrier: the intense energy costs of pulling a hard vacuum and maintaining milli- to micro-Kelvin temperatures (maybe you can do some clever thermal engineering and integration to save some money, but the operating costs will still be steep). Nonetheless, it would be a wonderful undertaking and could prove valuable in guiding the design and development of future quantum devices (maybe with the bulky and expensive quantum computer you can figure out how to build a more compact and energy-efficient, heat-resistant one).
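    To make the "two-level state and superposition" language above a bit more concrete, here is a tiny Python/NumPy toy that treats a qubit as a two-component state vector and applies a rotation "pulse" to move it between the 0 state, the 1 state, and superpositions of the two. It is plain linear algebra for illustration, not a model of any particular hardware mentioned in the post.

        # Toy illustration: a qubit as a 2-component complex state vector, and a
        # rotation "pulse" (an RX gate) that drives it between the basis states
        # and superpositions. Plain linear algebra, not a hardware model.
        import numpy as np

        ket0 = np.array([1.0, 0.0], dtype=complex)  # the |0> state


        def rx(theta: float) -> np.ndarray:
            """Rotation about the X axis by angle theta (a simple 'pulse')."""
            c, s = np.cos(theta / 2), np.sin(theta / 2)
            return np.array([[c, -1j * s],
                             [-1j * s, c]])


        for theta in (0.0, np.pi / 2, np.pi):
            state = rx(theta) @ ket0
            probs = np.abs(state) ** 2              # measurement probabilities
            print(f"theta = {theta:.2f}: P(0) = {probs[0]:.2f}, P(1) = {probs[1]:.2f}")

    In this picture a composite pulse is just a product of several such rotations, chosen so that small calibration errors in the individual pulses tend to cancel rather than accumulate.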

    Blog Entry, DATA, EDUCATION