• Posted on January 29, 2017 11:30 am
    Joseph Forbes
    No comments

    In wireless networking, dual band equipment is capable of transmitting in either of two different standard frequency ranges. Modern Wi-Fi home networks feature dual band broadband routers that support both 2.4 GHz and 5 GHz channels.

    The History of Dual Band Wireless Routers

    First generation home network routers, produced during the late 1990s and early 2000s, contained a single 802.11b Wi-Fi radio operating on the 2.4 GHz band. At the same time, a significant number of business networks supported 802.11a (5 GHz) devices. The first dual band Wi-Fi routers were built to support mixed networks having both 802.11a and 802.11b clients. Starting with 802.11n, Wi-Fi standards began including simultaneous dual band 2.4 GHz and 5 GHz support as a standard feature.

    Two Examples of Dual Band Wireless Routers

    The TP-LINK Archer C7 AC1750 Dual Band Wireless AC Gigabit Router supports 450 Mbps at 2.4 GHz and 1300 Mbps at 5 GHz, as well as IP-based bandwidth control so you can monitor the bandwidth of all the devices connected to your router. The NETGEAR N750 Dual Band Wi-Fi Gigabit Router is aimed at medium to large-sized homes and also comes with the genie app, so you can keep tabs on your network and get help troubleshooting if any repairs are needed.

    Dual Band Wi-Fi Adapters

    Dual band Wi-Fi network adapters contain both 2.4 GHz and 5 GHz wireless radios, similar to dual band routers. In the early days of Wi-Fi, some laptop Wi-Fi adapters supported both 802.11a and 802.11b/g radios so that a person could connect their computer to business networks during the workday and home networks on nights and weekends. Newer 802.11n and 802.11ac adapters can also be configured to use either band (but not both at the same time).

    Dual Band Phones

    Similar to dual band wireless network equipment, some cell phones also use two (or more) bands for cellular communications, separate from Wi-Fi. Dual band phones were originally created to support GPRS or EDGE data services on the 850 MHz, 900 MHz or 1900 MHz radio frequencies. Phones sometimes support tri band (three) or quad band (four) cellular transmission frequency ranges in order to maximize compatibility with different kinds of phone networks, which is helpful while roaming or traveling. Cell modems switch between different bands but do not support simultaneous dual band connections.

    Benefits of Dual Band Wireless Networking

    By supplying separate wireless interfaces for each band, dual band 802.11n and 802.11ac routers provide maximum flexibility in setting up a home network. Some home devices require the legacy compatibility and greater signal reach that 2.4 GHz offers, while others may require the additional network bandwidth that 5 GHz offers: dual band routers provide connections designed for the needs of each. Many Wi-Fi home networks suffer from wireless interference due to the prevalence of 2.4 GHz consumer gadgets; the ability to utilize 5 GHz on a dual band router helps avoid these issues. Dual band routers also incorporate Multiple-Input Multiple-Output (MIMO) radio configurations. The combination of multiple radios on one band together with dual band support provides much higher performance home networking than what single band routers can offer.
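    To make the two bands concrete, here is a minimal Python sketch (not from the original article) that maps a Wi-Fi channel number to the band it belongs to; the channel ranges shown are the common allocations and vary somewhat by country:

        # Classify a Wi-Fi channel number into its frequency band.
        def wifi_band(channel: int) -> str:
            if 1 <= channel <= 14:        # 2.4 GHz channels (2412-2484 MHz)
                return "2.4 GHz"
            if 36 <= channel <= 165:      # common 5 GHz channels (5180-5825 MHz)
                return "5 GHz"
            raise ValueError(f"unknown Wi-Fi channel: {channel}")

        for ch in (1, 6, 11, 36, 149):
            print(f"channel {ch:>3} -> {wifi_band(ch)}")

    A simultaneous dual band router simply runs one radio answering on channels from each range at the same time, which is why a single router can serve both kinds of clients.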

    Blog Entry, Hardware, Internet
  • Posted on January 28, 2017 11:46 am
    Joseph Forbes
    No comments

    Private Branch Exchange Explained

    A PBX (Private Branch Exchange) is a system that allows an organization to manage incoming and outgoing phone calls, and also allows communication internally within the organization. A PBX is made up of both hardware and software, and connects to communication devices like telephone adapters, hubs, switches, routers and, of course, telephone sets. The most recent PBXes have a wealth of very interesting features that make communication easier and more powerful for organizations, helping make them more efficient and boosting productivity. PBXes vary in size and complexity, ranging from very expensive and complex corporate communication systems to basic plans hosted in the cloud for a two-digit monthly fee. You can also have a simple PBX system at home with basic features as an upgrade to your existing traditional phone line.

    What Does a PBX Do?

    As mentioned above, the functions of a PBX can be very complex, but basically, a PBX does these things:

    - Allows the use of more than one telephone line in an organization, and manages outgoing and incoming calls.
    - Splits one single phone line into several internal lines, which are identified through three or four-digit numbers called extensions, and switches calls to the appropriate internal line. This saves the organization from having to pay for several lines, and allows all departments to be reached through one single phone number.
    - Allows free phone communication within the organization.
    - Empowers the whole communication system with VoIP (Voice over IP), which has a tremendous number of features and enhancements over traditional telephony, the most prominent being the cutting down of call costs.
    - Ensures a good interface with customers through features like call recording, voicemail, IVR, etc.
    - Automates responses to calling customers with IVR (interactive voice response), whereby the system can automatically direct users to the most appropriate line through voice menus. It is the kind of feature where, as a caller, you hear things like "Press 1 for the Finance Department, press 2 for complaints..."

    The IP-PBX

    PBXes changed a lot with the advent of IP telephony, or VoIP. After the analog PBXes that worked only on the telephone line and switches, we now have IP-PBXes, which use VoIP technology and IP networks like the Internet to channel calls. IP-PBXes are normally preferred due to the wealth of features they come with. With the exception of old, already-existing but still-working-fine PBXes, and those chosen because they are cheap, most PBX systems used nowadays tend to be IP-PBXes.

    The Hosted PBX

    You do not always have to invest in the hardware, software, installation and maintenance of your in-house PBX, especially if you are running a small business and the cost of ownership prohibits you from benefiting from those important features. There are numerous companies online that offer the PBX service for a monthly fee, without your needing anything but your telephone sets and router. These are called hosted PBX services, and they work in the cloud; the service is dispensed through the Internet. Hosted PBXes have the disadvantage of being generic, such that they cannot be tailored to your needs, but they are quite cheap and do not require any upfront investment.
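    As a rough illustration of the extension switching described above, here is a minimal Python sketch; the extension numbers and department names are hypothetical, and a real PBX tracks line state, voicemail, and much more:

        # Map internal extensions to departments and route calls to them.
        EXTENSIONS = {
            "101": "Finance Department",
            "102": "Complaints",
            "103": "Sales",
        }

        def route_call(dialed: str) -> str:
            department = EXTENSIONS.get(dialed)
            if department is None:
                return "Routing to operator (unknown extension)"
            return f"Routing to {department} on internal line {dialed}"

        print(route_call("101"))   # Routing to Finance Department on internal line 101
        print(route_call("999"))   # Routing to operator (unknown extension)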

    Blog Entry, Hardware, KnowledgeBase (KB)
  • Posted on December 31, 2016 10:00 pm
    Joseph Forbes
    No comments

    I am familiar with Linux on my personal computer, yet I have no experience administering systems. As a side job during university, I applied for a job that requires some skills in administration, and I wonder what skills I should work on for the interview. -Intern Geek

    From a Red Hat Certified Systems Administrator's experience, you should be able to:

    - Understand and use essential tools for handling files, directories, command-line environments, and documentation
    - Operate running systems, including booting into different run levels, identifying processes, starting and stopping virtual machines, and controlling services
    - Configure local storage using partitions and logical volumes
    - Create and configure file systems and file system attributes, such as permissions, encryption, access control lists, and network file systems
    - Deploy, configure, and maintain systems, including software installation, updates, and core services
    - Manage users and groups, including use of a centralized directory for authentication
    - Manage security, including basic firewall and SELinux configuration

    Basic knowledge list for a Linux administrator:

    - Basic commands (listing, cat, removing files/dirs, nano, vi editor, more, tail, etc.)
    - LVM
    - User authentication (user add, remove, lock, unlock, home dir, ssh access)
    - Group administration (group add, remove, sudo group)
    - Using SSH, disabling root for ssh access, and using sudo
    - Installing programs using rpm and yum (configuring yum)
    - Configuring a network (basics)
    - Types of processes and managing processes (basics)
    - Kill signals
    - Backup and restore
    - Types of run levels
    - NFS, Samba, NIS (basics)
    - Cron and at jobs
    - FTP and using FileZilla (easy)
    - Booting process
    - Kernel parameters (basics)
    - top, nice values
    - Services
    - Scripting (very basic)

    To clear an interview, I suggest polishing your skills depending on the company. For example, if the company runs web or database servers, then you should be familiar with installing a LAMP server and phpMyAdmin, installing and configuring mail servers (which is absolutely very easy), and basic hardening of a server (Tutorials: Tutorials | DigitalOcean, Hardening: How to secure an Ubuntu 16.04 LTS server - Part 1 The Basics). If you land at some big company, then your first work will be monitoring system performance while maintaining a running environment, and being able to add additional systems. And at last, learn some basics of Ubuntu commands (commands for installing packages in Ubuntu are different compared to Red Hat, but with the basics understood, you'll be able to admin most common environments). Throughout 2017 I will be posting follow-up details on Linux training. Eventually I'll have some webinars for average-user training in Linux.
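    As a taste of the "Services" and "Scripting (very basic)" items above, here is a minimal Python sketch, assuming a systemd-based distribution and appropriate privileges; the service name "sshd" is just an example:

        import subprocess

        # Report a systemd service's state and restart it if it is not active.
        def ensure_service_running(name: str) -> None:
            state = subprocess.run(
                ["systemctl", "is-active", name],
                capture_output=True, text=True,
            ).stdout.strip()
            print(f"{name}: {state}")
            if state != "active":
                subprocess.run(["systemctl", "restart", name], check=True)

        ensure_service_running("sshd")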

    Blog Entry, EDUCATION, JOBS
  • Posted on December 10, 2016 1:54 pm
    Joseph Forbes
    No comments

    An IP address, short for Internet Protocol address, is an identifying number for a piece of network hardware. Having an IP address allows a device to communicate with other devices over an IP-based network.

    What is an IP Address Used For?

    An IP address provides an identity to a networked device. Similar to a home or business address supplying that specific physical location with an identifiable address, devices on a network are differentiated from one another through IP addresses. If I'm going to send a package to my friend in another country, I have to know the exact destination. It's not enough to just put a package with his name on it through the mail and expect it to reach him. I must instead attach a specific address to it, which I could find by looking it up in a phone book. This same general process is used when sending data over the Internet. However, instead of using a phone book to look up someone's name to find their physical address, your computer uses DNS servers to look up a hostname to find its IP address. For example, when I enter a website like www.about.com into my browser, my request to load that page is sent to DNS servers that look up that hostname (about.com) to find its corresponding IP address (207.241.148.80). Without the IP address attached, my computer would have no clue what it is that I'm after.

    Different Types of IP Addresses

    Even if you've heard of IP addresses before, you may not realize that there are specific types of IP addresses. While all IP addresses are made up of numbers or letters, not all addresses are used for the same purpose. There are private IP addresses, public IP addresses, static IP addresses, and dynamic IP addresses. That's quite a variety! Following those links will give you much more information on what they each mean. To add to the complexity, each type of IP address can be an IPv4 address or an IPv6 address... more on these at the bottom of this page.

    In short, private IP addresses are used "inside" a network, like the one you probably run at home. These types of IP addresses are used to provide a way for your devices to communicate with your router and all the other devices in your private network. Private IP addresses can be set manually or assigned automatically by your router. Public IP addresses are used on the "outside" of your network and are assigned by your ISP. A public IP address is the main address that your home or business network uses to communicate with the rest of the networked devices around the world (i.e. the Internet). It provides a way for the devices in your home, for example, to reach your ISP, and therefore the outside world, allowing them to do things like access websites and communicate directly with other people's computers.

    Both private IP addresses and public IP addresses are either dynamic or static, which means that, respectively, they either change or they don't. An IP address that is assigned by a DHCP server is a dynamic IP address. If a device does not have DHCP enabled, or does not support it, then the IP address must be assigned manually, in which case the IP address is called a static IP address.

    How To Find Your IP Address

    Different devices and operating systems require unique steps to find the IP address. There are also different steps to take depending on whether you're looking for the public IP address provided to you by your ISP or the private IP address that your router handed out.
    Finding Your Public IP Address

    There are lots of ways to find your router's public IP address, but sites like IP Chicken, WhatsMyIP.org, or WhatIsMyIPAddress.com make this super easy. These sites work on any network-connected device that supports a web browser, like your smartphone, iPod, laptop, desktop, tablet, etc. Finding the private IP address of the specific device you're on isn't as simple...

    Finding Your Private IP Address

    In Windows, you can find your device's IP address via the Command Prompt, using the ipconfig command. Tip: See How Do I Find My Default Gateway IP Address? if you need to find the IP address of your router, or whatever device your network uses to access the public Internet. Linux users can launch a terminal window and enter the command hostname -I (that's a capital "i"), ifconfig, or ip addr show. For Mac OS X, use the command ifconfig to find your local IP address. iPhone, iPad, and iPod touch devices show their private IP address through the Settings app in the Wi-Fi menu: tap the small "i" button next to the network the device is connected to. Whether the IP address was assigned via DHCP or entered manually determines which tab (DHCP or Static) you need to choose to see it. You can see the local IP address of an Android device through Settings > Wireless Controls > Wi-Fi settings. Just tap on the network you're on to see a new window that shows network information, including the private IP address.

    IP Versions (IPv4 vs IPv6)

    There are two versions of IP: IPv4 and IPv6. If you've heard of these terms, you probably know that the former is the older, and now outdated, version, while IPv6 is the upgraded IP version. One reason IPv6 is replacing IPv4 is that it can provide a much larger number of IP addresses than IPv4 allows. With all the devices we have constantly connected to the Internet, it's important that there's a unique address available for each one of them.

    The way IPv4 addresses are constructed means IPv4 can provide over 4 billion unique IP addresses (2^32). While this is a very large number of addresses, it's just not enough for the modern world with all the different devices people are using on the Internet. Think about it: there are several billion people on earth. Even if everyone on the planet had just one device they used to access the Internet, IPv4 would still be insufficient to provide an IP address for all of them. IPv6, on the other hand, supports a whopping 340 trillion trillion trillion addresses (2^128). That's 340 followed by 36 zeros! This means every person on earth could connect billions of devices to the Internet. True, a bit overkill, but you can see how effectively IPv6 solves this problem. Visualizing this helps you understand just how many more IP addresses the IPv6 addressing scheme allows over IPv4: pretend a postage stamp could provide enough space to hold each and every IPv4 address. IPv6, then, to scale, would need the entire solar system to contain all of its addresses. In addition to the greater supply of IP addresses, IPv6 has the added benefits of eliminating the IP address collisions caused by private addresses, auto-configuration, no need for Network Address Translation (NAT), more efficient routing, easier administration, built-in privacy, and more. IPv4 displays addresses as 32-bit numbers written in decimal format, like 207.241.148.80 or 192.168.1.1.
Because IPv6 addresses are so much longer (128 bits), they are written in hexadecimal notation, like 3ffe:1900:4545:3:200:f8ff:fe21:67cf.
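    To tie these ideas together, here is a minimal Python sketch (standard library only) that resolves a hostname the way a browser's DNS lookup does, classifies the result, and computes the IPv4 and IPv6 address space sizes mentioned above; the hostname is just an example:

        import socket
        import ipaddress

        # Hostname -> IP address lookup, the "phone book" step described above.
        addr = socket.gethostbyname("www.example.com")
        print("Resolved address:", addr)

        # Classify the address: version, and private vs. public.
        ip = ipaddress.ip_address(addr)
        print("IP version:", ip.version, "| private:", ip.is_private)

        # IPv4 offers 2^32 addresses; IPv6 offers 2^128.
        print(f"IPv4 address space: {2**32:,}")     # 4,294,967,296
        print(f"IPv6 address space: {2**128:,}")    # about 3.4 x 10^38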

    Blog Entry, KnowledgeBase (KB), Technicals
  • Posted on December 10, 2016 11:50 am
    Joseph Forbes
    No comments

    192.168.1.101, 192.168.1.102 and 192.168.1.103 are all part of an IP address range typically used on home computer networks. They are most commonly found in homes using Linksys broadband routers, but the same addresses can also be used with other home routers and with other kinds of private networks.

    How Home Routers Use the 192.168.1.x IP Address Range

    Home routers by default define a range of IP addresses to be assigned to client devices via DHCP. Routers that use 192.168.1.1 as their network gateway address typically assign DHCP addresses starting with 192.168.1.100. This means that 192.168.1.101 will be the second such address in line to be assigned, 192.168.1.102 the third, 192.168.1.103 the fourth, and so on. While DHCP does not require addresses to be assigned in sequential order like this, it is the normal behavior. Consider the following example for a Wi-Fi home network:

    - The home administrator uses a PC to initially set up the router and home network. The PC gets assigned IP address 192.168.1.100 by the router.
    - A second PC is added to the network next. This PC receives 192.168.1.101.
    - A game console then joins the network. It receives 192.168.1.102.
    - A phone connects to the router via Wi-Fi, receiving 192.168.1.103.

    Assigned addresses can be swapped over time. In the above example, if both the game console and phone are disconnected from the network for an extended period of time, their addresses return to the DHCP pool and could be re-assigned in the opposite order, depending on which device re-connects first.

    192.168.1.101 is a private (also called "non-routable") IP address. This means computers on the Internet (or other remote networks) cannot communicate with that address directly without the assistance of intermediate routers. Messages from a home network router pertaining to 192.168.1.101 refer to one of the local computers and not an outside device.

    Configuring the 192.168.1.x IP Address Range

    Any home network or other private network can use this same 192.168.1.x IP address range even if the router uses different settings by default. To set up a router for this specific range:

    - Log into the router as administrator.
    - Navigate to the router's IP and DHCP settings; the location varies depending on the type of router but is often found on a Setup menu.
    - Set the router's local IP address to 192.168.1.1 or another 192.168.1.x value; 'x' should be a sufficiently low number to leave address space for clients.
    - Set the DHCP starting IP address to 192.168.1.x+1. For example, if the router's IP address is chosen to be 192.168.1.101, then the starting IP address for clients can be 192.168.1.102.
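    The sequential hand-out described in the example above is easy to model. Here is a minimal Python sketch of that ordering (real DHCP servers also track lease times and renewals, which this ignores):

        import ipaddress

        # A DHCP pool starting at 192.168.1.100, handed out in order.
        pool = (ipaddress.ip_address("192.168.1.100") + i for i in range(50))

        for device in ("admin PC", "second PC", "game console", "phone"):
            print(f"{device:>12} -> {next(pool)}")
        # admin PC -> 192.168.1.100, second PC -> 192.168.1.101, and so on.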

    Blog Entry, KnowledgeBase (KB), Technicals
  • Posted on December 10, 2016 11:48 am
    Joseph Forbes
    No comments

    192.168.1.100 is the beginning of the default dynamic IP address range for some Linksys home broadband routers. It is a private IP address that can also be assigned to any device on a local network configured to use this address range.

    Working with 192.168.1.100 on Linksys Routers

    Many Linksys routers set 192.168.1.1 as their default local address, then define a range (pool) of IP addresses made available to client devices dynamically via DHCP. Home network administrators can view and update these settings through the router console. Some Linksys router consoles support a configuration setting called "Starting IP Address" that defines which IP address is the first one in the pool that DHCP will allocate from. The first computer (or other device) a person connects to one of these routers will typically be assigned this address. While 192.168.1.100 is often the default for this setting, administrators are free to change it to a different address, like 192.168.1.2 for example. Even if 192.168.1.100 is not chosen as the start address, it can still belong to the DHCP address pool. Linksys routers allow administrators to specify the size of the pool and another setting called a subnet mask, which together determine the range of addresses allowed on the local network.

    Working with 192.168.1.100 on Private Networks

    Any private network, whether a home or business network, can use 192.168.1.100 no matter the type of router involved. It can be part of a DHCP pool or set as a static IP address. The device assigned 192.168.1.100 can change when a network uses DHCP, but does not change when using static addressing. Run a ping test from any other computer on the network to determine whether 192.168.1.100 is assigned to a device currently connected (see the ping sketch at the end of this entry). A router's console also displays the list of DHCP addresses it has assigned (some of which may belong to devices currently offline). 192.168.1.100 is a private IPv4 network address, meaning that ping tests or any other connections from the Internet or other outside networks cannot be made to it directly. Traffic for these devices passes through the router and must be initiated by the local device. A network client does not gain improved performance or better security from having 192.168.1.100 as its address compared to any other private address.

    Issues with 192.168.1.100

    Administrators should avoid manually assigning this address to any device when it belongs to a router's DHCP address range. Otherwise, IP address conflicts can result, because the router can assign this address to a device different from the one already using it.
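    Here is a minimal Python sketch of the ping test mentioned above; the flags shown are for the Linux ping command (Windows uses -n instead of -c, and timeout flags differ by platform):

        import subprocess

        # Return True if a host answers a single ping within two seconds.
        def is_in_use(address: str) -> bool:
            result = subprocess.run(
                ["ping", "-c", "1", "-W", "2", address],
                capture_output=True,
            )
            return result.returncode == 0

        print("192.168.1.100 in use:", is_in_use("192.168.1.100"))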

    Blog Entry, KnowledgeBase (KB), Technicals
  • Posted on December 10, 2016 11:47 am
    Joseph Forbes
    No comments

    Definition: The IP address 192.168.1.254 is the default for certain home broadband routers and broadband modems, including:

    - Some 3Com OfficeConnect routers
    - Netopia / Cayman Internet gateways
    - Billion ADSL routers
    - Linksys SRW2024 managed switches
    - Westell modems for Bellsouth / AT&T DSL Internet service in the U.S.

    This address is set by the manufacturer at the factory, but you can change it at any time using the vendor's console management software. Entering 'http://192.168.1.254' (and not 'www.192.168.1.254') into a Web browser's address bar enables access to the router's console. 192.168.1.254 is a private IPv4 network address. Any device on a local network can be set to use it. As with any such address, however, only one device on the network should use 192.168.1.254 at a time to avoid IP address conflicts.

    Blog Entry, KnowledgeBase (KB), Technicals
  • Posted on December 10, 2016 11:45 am
    Joseph Forbes
    No comments

    The IP address 192.168.0.1 is the default for certain home broadband routers, principally various D-Link and Netgear models. This address is set by the manufacturer at the factory, but you can change it at any time using the network router's administrative console. 192.168.0.1 is a private IPv4 network address. Home routers can use it as their default gateway address. On such routers, you can access the administrative console by pointing a Web browser to: http://192.168.0.1 Any brand of router, or any computer on a local network for that matter, can be set to use this address or a comparable private IPv4 address. As with any IP address, only one device on the network should use 192.168.0.1 at a time to avoid address conflicts.

    KnowledgeBase (KB), Technicals
  • Posted on December 10, 2016 5:51 am
    Joseph Forbes
    No comments

    10.0.0.1 is an IP address found on many local computer networks, particularly business networks. Business-class network routers assigned 10.0.0.1 as their local gateway address are typically configured to support a subnet with client IP addresses starting at 10.0.0.2. This same address is also the default local address for certain models of home broadband routers from Zoom, Edimax, Siemens and Micronet.

    Why 10.0.0.2 is Popular

    Internet Protocol (IP) version 4 defines certain sets of IP addresses as restricted for private use (not available to be assigned to Web servers or other Internet hosts). The first and largest of these private IP address ranges begins with 10.0.0.0. Corporate networks wanting flexibility in allocating large numbers of IP addresses naturally gravitated to using the 10.0.0.0 network as their default, with 10.0.0.2 one of the first addresses allocated from that range.

    Automatic Assignment of 10.0.0.2

    Computers and other devices that support DHCP can receive their IP addresses automatically from a router. The router decides which address to assign from the range (called a DHCP pool) it is set up to manage. Routers normally assign these pooled addresses in sequential order (though the order is not guaranteed). Therefore, 10.0.0.2 is most commonly the address given to the first client on a local network that connects to the router based at 10.0.0.1.

    Manual Assignment of 10.0.0.2

    Most modern network devices, including computers and game consoles, allow their IP address to be set manually. The text "10.0.0.2" or the four sets of digits - 10, 0, 0, and 2 - must be keyed into a network setting configuration screen on the device. However, simply entering these numbers does not guarantee it is a valid address for that device to use: the local router must also be configured to include 10.0.0.2 in its supported address range.

    Working with 10.0.0.2

    Users can access the administration screens of routers using 10.0.0.2 by pointing a Web browser to http://10.0.0.2/ Most networks assign private IP addresses like 10.0.0.2 dynamically using DHCP. Assigning it to a device manually (a process called "fixed" or static IP address assignment) is also possible, but not recommended due to the risk of IP address conflicts: routers cannot always recognize whether a given address in their pool has already been assigned to a client manually before assigning it automatically. In the worst case, two different devices on the network will both be assigned 10.0.0.2, resulting in connection failures for both.
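    Before assigning 10.0.0.2 manually, it is worth confirming that the address is private and whether it falls inside the router's DHCP pool. A minimal Python sketch, with hypothetical pool bounds:

        import ipaddress

        addr = ipaddress.ip_address("10.0.0.2")
        print("Private address:", addr.is_private)    # True: 10.0.0.0/8 is reserved

        # Example pool bounds; check your router's actual DHCP settings.
        pool_start = ipaddress.ip_address("10.0.0.2")
        pool_end = ipaddress.ip_address("10.0.0.100")
        if pool_start <= addr <= pool_end:
            print("10.0.0.2 is inside the DHCP pool; manual assignment risks a conflict.")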

    Blog Entry, KnowledgeBase (KB), Technicals
  • Posted on July 4, 2016 9:30 am
    Joseph Forbes
    No comments

    The evolution of the modern graphics processor begins with the introduction of the first 3D add-in cards in 1995, followed by the widespread adoption of 32-bit operating systems and the affordable personal computer. The graphics industry that existed before that largely consisted of a more prosaic 2D, non-PC architecture, with graphics boards better known by their chips' alphanumeric naming conventions and their huge price tags. 3D gaming and virtualization PC graphics eventually coalesced from sources as diverse as arcade and console gaming, military, robotics and space simulators, as well as medical imaging. The early days of 3D consumer graphics were a Wild West of competing ideas, from how to implement the hardware, to the use of different rendering techniques and their application and data interfaces, to the persistent naming hyperbole. The early graphics systems featured a fixed function pipeline (FFP) and an architecture following a very rigid processing path, utilizing almost as many graphics APIs as there were 3D chip makers. While 3D graphics turned a fairly dull PC industry into a light and magic show, they owe their existence to generations of innovative endeavour. Over the next few weeks (this is the first installment in a series of four articles) we'll be taking an extensive look at the history of the GPU, going from the early days of 3D consumer graphics, to the 3Dfx Voodoo game-changer, the industry's consolidation at the turn of the century, and today's modern GPGPU.

    1976 - 1995: The Early Days of 3D Consumer Graphics

    The first true 3D graphics started with early display controllers, known as video shifters and video address generators. They acted as a pass-through between the main processor and the display. The incoming data stream was converted into serial bitmapped video output such as luminance, color, and vertical and horizontal composite sync, which kept the line of pixels to a display generation and synchronized each successive line along with the blanking interval (the time between ending one scan line and starting the next). A flurry of designs arrived in the latter half of the 1970s, laying the foundation for 3D graphics as we know them.

    [Image: Atari 2600, released in September 1977]

    RCA's "Pixie" video chip (CDP1861) in 1976, for instance, was capable of outputting an NTSC-compatible video signal at 62x128 resolution, or 64x32 for the ill-fated RCA Studio II console. The video chip was quickly followed a year later by the Television Interface Adapter (TIA) 1A, which was integrated into the Atari 2600 for generating the screen display, sound effects, and reading input controllers. Development of the TIA was led by Jay Miner, who later also led the design of the custom chips for the Commodore Amiga computer. In 1978, Motorola unveiled the MC6845 video address generator. This became the basis for the IBM PC's Monochrome Display Adapter and Color Graphics Adapter (MDA/CGA) cards of 1981, and provided the same functionality for the Apple II. Motorola added the MC6847 video display generator later the same year, which made its way into a number of first generation personal computers, including the Tandy TRS-80.

    [Image: IBM PC's Monochrome Display Adapter]

    A similar solution from Commodore's MOS Tech subsidiary, the VIC, provided graphics output for 1980-83 vintage Commodore home computers.
    In November the following year, LSI's ANTIC (Alphanumeric Television Interface Controller) and CTIA/GTIA co-processor (Color or Graphics Television Interface Adaptor) debuted in the Atari 400. ANTIC processed 2D display instructions using direct memory access (DMA). Like most video co-processors, it could generate playfield graphics (background, title screens, scoring display), while the CTIA generated colors and moveable objects. Yamaha and Texas Instruments supplied similar ICs to a variety of early home computer vendors.

    The next steps in the graphics evolution were primarily in the professional fields. Intel used their 82720 graphics chip as the basis for the $1000 iSBX 275 Video Graphics Controller Multimode Board. It was capable of displaying eight-color data at a resolution of 256x256 (or monochrome at 512x512). Its 32KB of display memory was sufficient to draw lines, arcs, circles, rectangles and character bitmaps, and the chip also had provision for zooming, screen partitioning and scrolling. SGI quickly followed up with their IRIS Graphics for workstations -- a GR1.x graphics board with provision for separate add-in (daughter) boards for color options, geometry, Z-buffer and Overlay/Underlay.

    Industrial and military 3D virtualization was relatively well developed at the time. IBM, General Electric and Martin Marietta (who were to buy GE's aerospace division in 1992), along with a slew of military contractors, technology institutes and NASA, ran various projects that required the technology for military and space simulations. The Navy also developed a flight simulator using 3D virtualization from MIT's Whirlwind computer in 1951. Besides defence contractors, there were companies that straddled military markets with professional graphics. Evans & Sutherland -- who were to provide professional graphics card series such as the Freedom and REALimage -- also provided graphics for the CT5 flight simulator, a $20 million package driven by a DEC PDP-11 mainframe. Ivan Sutherland, the company's co-founder, developed a computer program in 1961 called Sketchpad, which allowed drawing geometric shapes and displaying them on a CRT in real-time using a light pen. This was the progenitor of the modern Graphical User Interface (GUI).

    In the less esoteric field of personal computing, Chips and Technologies' 82C43x series of EGA (Enhanced Graphics Adapter) chips provided much needed competition to IBM's adapters, and could be found installed in many PC/AT clones around 1985. The year was noteworthy for the Commodore Amiga as well, which shipped with the OCS chipset. The chipset comprised three main component chips -- Agnus, Denise, and Paula -- which allowed a certain amount of graphics and audio calculation to be non-CPU dependent.

    In August of 1985, three Hong Kong immigrants, Kwok Yuan Ho, Lee Lau and Benny Lau, formed Array Technology Inc in Canada. By the end of the year, the name had changed to ATI Technologies Inc. ATI got their first product out the following year, the OEM Color Emulation Card. It was used for outputting monochrome green, amber or white phosphor text against a black background to a TTL monitor via a 9-pin DE-9 connector. The card came equipped with a minimum of 16KB of memory and was responsible for a large percentage of ATI's CAD$10 million in sales in the company's first year of operation.
    This was largely done through a contract that supplied around 7000 chips a week to Commodore Computers.

    The advent of color monitors and the lack of a standard among the array of competitors ultimately led to the formation of the Video Electronics Standards Association (VESA), of which ATI was a founding member, along with NEC and six other graphics adapter manufacturers. In 1987, ATI added the Graphics Solution Plus series to its product line for OEMs, which used IBM's PC/XT ISA 8-bit bus for Intel 8086/8088-based IBM PCs. The chip supported MDA, CGA and EGA graphics modes via DIP switches. It was basically a clone of the Plantronics Colorplus board, but with room for 64KB of memory. Paradise Systems' PEGA1, 1a, and 2a (256KB), released in 1987, were Plantronics clones as well.

    [Image: ATI EGA 800: 16-color VGA emulation, 800x600 support]

    The EGA Wonder series 1 to 4 arrived in March for $399, featuring 256KB of DRAM as well as compatibility with CGA, EGA and MDA emulation with up to 640x350 and 16 colors. Extended EGA was available for series 2, 3 and 4. Filling out the high end were the EGA Wonder 800, with 16-color VGA emulation and 800x600 resolution support, and the VGA Improved Performance (VIP) card, which was basically an EGA Wonder with a digital-to-analog converter (DAC) added to provide limited VGA compatibility. The latter cost $449, plus $99 for the Compaq expansion module.

    ATI was far from alone riding the wave of consumer appetite for personal computing. Many new companies and products arrived that year. Among them were Trident, SiS, Tamerack, Realtek, Oak Technology, LSI's G-2 Inc., Hualon, Cornerstone Imaging and Winbond -- all formed in 1986-87. Meanwhile, companies such as AMD, Western Digital/Paradise Systems, Intergraph, Cirrus Logic, Texas Instruments, Gemini and Genoa would produce their first graphics products during this timeframe.

    ATI's Wonder series continued to gain prodigious updates over the next few years. In 1988, the Small Wonder Graphics Solution with game controller port and composite out options became available (for CGA and MDA emulation), as well as the EGA Wonder 480 and 800+ with Extended EGA and 16-bit VGA support, and the VGA Wonder and Wonder 16 with added VGA and SVGA support. A Wonder 16 equipped with 256KB of memory retailed for $499, while a 512KB variant cost $699. An updated VGA Wonder/Wonder 16 series arrived in 1989, including the reduced-cost VGA Edge 16 (Wonder 1024 series). New features included a bus mouse port and support for the VESA Feature Connector. This was a gold-fingered connector similar to a shortened data bus slot connector, and it linked via a ribbon cable to another video controller to bypass a congested data bus.

    The Wonder series updates continued to move apace in 1991. The Wonder XL card added VESA 32K color compatibility and a Sierra RAMDAC, which boosted maximum display resolution to 640x480 @ 72Hz or 800x600 @ 60Hz. Prices ranged from $249 (256KB) and $349 (512KB) to $399 for the 1MB RAM option. A reduced-cost version called the VGA Charger, based on the previous year's Basic-16, was also made available.

    [Image: ATI Graphics Ultra ISA (Mach8 + VGA)]

    ATI also added a variation of the Wonder XL that incorporated a Creative Sound Blaster 1.5 chip on an extended PCB.
    Known as the VGA Stereo-F/X, it was capable of simulating stereo from Sound Blaster mono files at something approximating FM radio quality. The Mach series launched with the Mach8 in May of that year. It sold as either a chip or a board that allowed, via a programming interface (API), the offloading of limited 2D drawing operations such as line-draw, color-fill and bitmap combination (Bit BLIT). Graphics boards such as the ATI VGAWonder GT offered a 2D + 3D option, combining the Mach8 with the graphics core (28800-2) of the VGA Wonder+ for its 3D duties. The Wonder and Mach8 pushed ATI through the CAD$100 million sales milestone for the year, largely on the back of Windows 3.0's adoption and the increased 2D workloads that could be employed with it.

    S3 Graphics was formed in early 1989 and produced its first 2D accelerator chip and a graphics card eighteen months later, the S3 911 (or 86C911). Key specs for the latter included 1MB of VRAM and 16-bit color support. The S3 911 was superseded by the 924 that same year -- basically a revised 911 with 24-bit color -- and updated again the following year with the 928, which added 32-bit color, and the 801 and 805 accelerators. The 801 used an ISA interface, while the 805 used VLB. Between the 911's introduction and the advent of the 3D accelerator, the market was flooded with 2D GUI designs based on S3's original -- notably from Tseng Labs, Cirrus Logic, Trident, IIT, ATI (the Mach32) and Matrox (the MAGIC RGB).

    In January 1992, Silicon Graphics Inc (SGI) released OpenGL 1.0, a multi-platform, vendor-agnostic application programming interface (API) for both 2D and 3D graphics. OpenGL evolved from SGI's proprietary API, called IRIS GL (Integrated Raster Imaging System Graphical Library). It was an initiative to keep non-graphical functionality out of IRIS and allow the API to run on non-SGI systems, as rival vendors were starting to loom on the horizon with their own proprietary APIs. Initially, OpenGL was aimed at the professional UNIX-based markets, but with developer-friendly support for extension implementation it was quickly adopted for 3D gaming.

    Microsoft was developing a rival API of their own called Direct3D, and didn't exactly break a sweat making sure OpenGL ran as well as it could under the new Windows operating systems. Things came to a head a few years later when John Carmack of id Software, whose previously released Doom had revolutionized PC gaming, ported Quake to use OpenGL on Windows and openly criticised Direct3D.

    [Image: Fast forward: GLQuake released in 1997 versus original Quake]

    Microsoft's intransigence increased as they denied licensing of OpenGL's Mini-Client Driver (MCD) on Windows 95, which would have allowed vendors to choose which features would have access to hardware acceleration. SGI replied by developing the Installable Client Driver (ICD), which not only provided the same ability, but did so even better, since MCD covered rasterisation only while ICD added lighting and transform functionality (T&L).

    During the rise of OpenGL, which initially gained traction in the workstation arena, Microsoft was busy eyeing the emerging gaming market with designs on their own proprietary API. They acquired RenderMorphics in February 1995, whose Reality Lab API was gaining traction with developers and became the core of Direct3D.
    At about the same time, 3dfx's Brian Hook was writing the Glide API that was to become the dominant API for gaming. This was in part due to Microsoft's involvement with the Talisman project (a tile-based rendering ecosystem), which diluted the resources intended for DirectX. As D3D became widely available on the back of Windows adoption, proprietary APIs such as S3d (S3), Matrox Simple Interface, Creative Graphics Library, C Interface (ATI), SGL (PowerVR), NVLIB (Nvidia), RRedline (Rendition) and Glide began to lose favor with developers. It didn't help matters that some of these proprietary APIs were allied with board manufacturers under increasing pressure to add to a rapidly expanding feature list. This included higher screen resolutions, increased color depth (from 16-bit to 24 and then 32), and image quality enhancements such as anti-aliasing. All of these features called for increased bandwidth, graphics efficiency and faster product cycles.

    The year 1993 ushered in a flurry of new graphics competitors, most notably Nvidia, founded in January of that year by Jen-Hsun Huang, Curtis Priem and Chris Malachowsky. Huang was previously the Director of Coreware at LSI, while Priem and Malachowsky both came from Sun Microsystems, where they had previously developed the SunSPARC-based GX graphics architecture. Fellow newcomers Dynamic Pictures, ARK Logic, and Rendition joined Nvidia shortly thereafter. Market volatility had already forced a number of graphics companies to withdraw from the business, or to be absorbed by competitors. Amongst them were Tamerack, Gemini Technology, Genoa Systems, Hualon, Headland Technology (bought by SPEA), Acer, Motorola and Acumos (bought by Cirrus Logic).

    One company that was moving from strength to strength, however, was ATI. As a forerunner of the All-In-Wonder series, late November saw the announcement of ATI's 68890 PC TV decoder chip, which debuted inside the Video-It! card. The chip was able to capture video at 320x240 @ 15 fps, or 160x120 @ 30 fps, as well as compress/decompress in real time thanks to the onboard Intel i750PD VCP (Video Compression Processor). It was also able to communicate with the graphics board via the data bus, thus negating the need for dongles or ports and ribbon cables. The Video-It! retailed for $399, while a lesser-featured model named Video-Basic completed the line-up.

    Five months later, in March, ATI belatedly introduced a 64-bit accelerator: the Mach64. The financial year had not been kind to ATI, with a CAD$2.7 million loss as it slipped in the marketplace amid strong competition. Rival boards included the S3 Vision 968, which was picked up by many board vendors, and the Trio64, which picked up OEM contracts from Dell (Dimension XPS), Compaq (Presario 7170/7180), AT&T (Globalyst), HP (Vectra VE 4), and DEC (Venturis/Celebris).

    [Image: Vision 968: S3's first motion video accelerator]

    Released in 1995, the Mach64 notched a number of notable firsts. It became the first graphics adapter to be available for PC and Mac computers, in the form of the Xclaim ($450 and $650 depending on onboard memory), and, along with S3's Trio, offered full-motion video playback acceleration. The Mach64 also ushered in ATI's first pro graphics cards, the 3D Pro Turbo and 3D Pro Turbo+PC2TV, priced at a cool $599 for the 2MB option and $899 for the 4MB.
    [Image: ATI Mach64 VT with support for TV tuner]

    The following month saw a technology start-up called 3DLabs rise onto the scene, born when DuPont's Pixel graphics subsidiary was bought out from its parent company, along with the GLINT 300SX processor, capable of OpenGL rendering, fragment processing and rasterisation. Due to their high price, the company's cards were initially aimed at the professional market. The Fujitsu Sapphire2SX 4MB retailed for $1600-$2000, while an 8MB ELSA GLoria 8 was $2600-$2850. The 300SX, however, was intended for the gaming market. The Gaming GLINT 300SX of 1995 featured a much-reduced 2MB of memory. It used 1MB for textures and Z-buffer and the other 1MB for the frame buffer, but came with an option to increase the VRAM for Direct3D compatibility for another $50 over the $349 base price. The card failed to make headway in an already crowded marketplace, but 3DLabs was already working on a successor in the Permedia series.

    S3 seemed to be everywhere at that time. The high-end OEM market was dominated by the company's Trio64 chipsets, which integrated a DAC, a graphics controller, and a clock synthesiser into a single chip. They also utilized a unified frame buffer and supported hardware video overlay (a dedicated portion of graphics memory for rendering video as the application requires). The Trio64 and its 32-bit memory bus sibling, the Trio32, were available as OEM units and standalone cards from vendors such as Diamond, ELSA, Sparkle, STB, Orchid, Hercules and Number Nine. Diamond Multimedia's prices ranged from $169 for a ViRGE-based card, to $569 for a Trio64+ based Diamond Stealth64 Video with 4MB of VRAM.

    The mainstream end of the market also included offerings from Trident, a long-time OEM supplier of no-frills 2D graphics adapters that had recently added the 9680 chip to its line-up. The chip boasted most of the features of the Trio64, and the boards were generally priced around the $170-200 mark. They offered acceptable 3D performance in that bracket, with good video playback capability. Other newcomers in the mainstream market included Weitek's Power Player 9130 and Alliance Semiconductor's ProMotion 6410 (usually seen as the Alaris Matinee or FIS's OptiViewPro). Both offered excellent scaling with CPU speed, while the latter combined the strong scaling engine with antiblocking circuitry to obtain smooth video playback, much better than in previous chips such as the ATI Mach64, Matrox MGA 2064W and S3 Vision968.

    Nvidia launched their first graphics chip, the NV1, in May; it was the first commercial graphics processor capable of 3D rendering, video acceleration, and integrated GUI acceleration. Nvidia partnered with ST Microelectronics to produce the chip on their 500nm process, and the latter also promoted the STG2000 version of the chip. Although it was not a huge success, it did represent the first financial return for the company. Unfortunately for Nvidia, just as the first vendor boards started shipping (notably the Diamond Edge 3D) in September, Microsoft finalized and released DirectX 1.0. The D3D graphics API confirmed that it relied upon rendering triangular polygons, whereas the NV1 used quad texture mapping.
    Limited D3D compatibility was added via a driver that wrapped triangles as quadratic surfaces, but a lack of games tailored for the NV1 doomed the card as a jack of all trades, master of none. Most of the games were ported from the Sega Saturn. A 4MB NV1 with integrated Saturn ports (two per expansion bracket, connected to the card via ribbon cable) retailed for around $450 in September 1995.

    Microsoft's late changes and launch of the DirectX SDK left board manufacturers unable to directly access hardware for digital video playback. This meant that virtually all discrete graphics cards had functionality issues in Windows 95. Drivers under Windows 3.1 from a variety of companies were, by contrast, generally faultless.

    ATI announced their first 3D accelerator chip, the 3D Rage (also known as the Mach 64 GT), in November 1995. The first public demonstration of it came at the E3 video game conference held in Los Angeles in May the following year, and the card itself became available a month later. The 3D Rage merged the 2D core of the Mach64 with 3D capability. Late revisions to the DirectX specification meant that the 3D Rage had compatibility problems with many games that used the API -- mainly the lack of depth buffering. With an on-board 2MB EDO RAM frame buffer, 3D modality was limited to 640x480x16-bit or 400x300x32-bit. Attempting 32-bit color at 640x480 generally resulted in onscreen color corruption, and 2D resolution peaked at 1280x1024. If gaming performance was mediocre, the full screen MPEG playback ability at least went some way toward balancing the feature set.

    ATI reworked the chip, and in September the Rage II launched. It rectified the D3D issues of the first chip in addition to adding MPEG2 playback support. Initial cards, however, still shipped with 2MB of memory, hampering performance and causing issues with perspective/geometry transforms. As the series was expanded to include the Rage II+DVD and 3D Xpression+, memory capacity options grew to 8MB.

    While ATI was first to market with a 3D graphics solution, it didn't take too long for other competitors with differing ideas of 3D implementation to arrive on the scene -- namely 3dfx, Rendition, and VideoLogic.

    [Image: Screamer 2, released in 1996, running on Windows 95 with 3dfx Voodoo 1 graphics]

    In the race to release new products into the marketplace, 3Dfx Interactive won over Rendition and VideoLogic. The performance race, however, was over before it had started, with the 3Dfx Voodoo Graphics effectively annihilating all competition.

    This article is the first installment in a series of four. If you enjoyed this, make sure to join us next week as we take a stroll down memory lane to the heyday of 3Dfx, Rendition, Matrox and a young company called Nvidia.

    Part 1: (1976 - 1995) The Early Days of 3D Consumer Graphics
    Part 2: (1995 - 1999) 3Dfx Voodoo: The Game-changer
    Part 3: (2000 - 2005) Down to Two: The Graphics Market Consolidation
    Part 4: (2006 - Present) The Modern GPU: Stream processing units a.k.a. GPGPU

    Blog Entry, ENTERTAINMENT, Hardware
  • Posted on June 25, 2016 11:05 am
    Joseph Forbes
    No comments

    Wondering how fast your Internet connection really is? You'll need to test your Internet speed to find out. There are plenty of ways to do this, some more accurate than others, depending on why you're testing. One common reason to test your Internet speed is to make sure you're getting the Mbps- or Gbps-level bandwidth you're paying your ISP for. If your tests show a regularly sluggish connection, your ISP may have an issue, and you may have a refund in your future. Another reason to test your Internet speed is to make sure you'll be able to stream high-bandwidth movies, like those from Netflix, Hulu, Amazon, and other providers. If your Internet speed is too slow, you'll get choppy video or regular buffering.

    Internet speed test sites and bandwidth-testing smartphone apps are the two most common ways to test your high-speed Internet, but there are others, like service-specific tests, ping and latency tests, DNS speed tests, and more. Below are the three most common scenarios for testing Internet speed, each of which requires a different way of testing:

    - You suspect that your ISP or wireless provider isn't giving you the bandwidth you're paying for, either on purpose or because something is wrong.
    - You're very happy (or very sad) with the state of your high-speed Internet and you want to tell the world about it!
    - You want to check the Internet speed between your device and a service you're paying for, like Netflix, HBO GO, etc.

    Just scroll down until you find the section that you're after. Choosing the right way to test your Internet speed is the first, and easiest, step toward making sure the results are as accurate as possible.

    How to Test Your Internet Speed When You're Sure It's Too Slow

    Are most web pages taking forever to load? Are those cat videos buffering so much that you can't even enjoy them? If so, especially if this is new behavior, then it's definitely time to check your Internet speed. Here's how to test your Internet speed when you suspect that your fiber, cable, or DSL provider isn't providing you with the bandwidth you're paying for. This is also the method to take with your mobile computer when you think your wireless or hotspot Internet connection is slower than it should be:

    1. Locate your ISP's official Internet speed test page from Tim Fisher's ISP-Hosted Internet Speed Tests page. Note: I have almost every major US and Canadian ISP speed test page listed, but I may be missing smaller providers. Let me know if yours isn't listed and I'll dig it up.
    2. Close any other apps, windows, programs, etc. that might be using your Internet connection. If you're at home, where other devices might be using the same connection, disconnect those or turn them off before beginning the test.
    3. Follow whatever instructions you're given on screen to test your Internet speed. Tip: A number of ISPs use Flash-based Internet speed tests, even though most devices, and more and more browsers, do not support Flash. Choose a non-ISP-hosted test if you have to, but know that your ISP might not give as much credit to those results.
    4. Log the results of the speed test, ideally with a screenshot. Name the screenshot with the date and time you took the test so it's easy to identify later.
    5. Repeat Steps 3 and 4 several times, testing with the same computer or device each time, using the same Internet speed test.
    Note: For the best results, if your schedule permits, test your Internet speed once in the morning, once in the afternoon, and once in the evening, over the course of several days. If you find that your Internet speed is consistently slower than you're paying for, it's time to take this data to your Internet Service Provider and ask for service to improve your connection. Bandwidth that varies a lot at different times of day, sometimes meeting or exceeding what you're paying for, may have more to do with bandwidth throttling or capacity issues at your ISP than an actual problem. Regardless, it might be time to negotiate the price of your high-speed plan or get a discount on an upgrade.

    How to Test Your Internet Speed for Fun

    Just generally curious about your Internet speed? If so, an Internet speed test site or smartphone app is a great choice. These tools are easy to use and understand, and are great for bragging to your friends about that new super-fast connection you just signed up for. Here's how to test your Internet speed when you have no specific concern or goal, other than a little gloating... or maybe sympathy:

    1. Choose a testing site. Any one will do, even the ISP-hosted ones if you'd rather use one of those. Tip: SpeedOf.Me is one of my favorite speed test sites; it doesn't require Flash, lets you share your results on social networks, and is probably more accurate, on average, than more popular tests like Speedtest.net.
    2. Follow whatever instructions you're given on screen to test your Internet speed. Most broadband testing services, like both SpeedOf.Me and Speedtest.net, test both your upload and download bandwidth with a single click.
    3. Once the test is over, you'll be presented with some kind of test result and some method of sharing it, usually via Facebook, Twitter, email, etc. You can oftentimes save these small results images to your own computer, too, which you can use to keep track of your Internet speed over time. Some testing sites also save your previous results for you automatically on their servers.

    Testing your Internet speed and sharing the results is especially fun after upgrading. Be the envy of your friends and family everywhere with the 1,245 Mbps download speed you're getting on your new fiber connection!

    How to Test Your Internet Speed For a Specific Service

    Curious if Netflix will work great at your home... or why it's suddenly not? Wondering if your Internet connection will support streaming your favorite new shows on HBO GO, Hulu, or Amazon Instant Video? With so many streaming services, each available on a wide variety of devices, all of which are being constantly updated, it'd be impossible to give you a simple speed test how-to that covers everything. That said, there is a lot we can talk about, some of which is very specific to the various popular streaming movie and video services out there.

    A basic Internet speed test is a good place to start. Even though it's not a true test between your connected television (or tablet, or Roku, or PC, etc.) and the Netflix or Hulu (or wherever) servers, any of the better Internet speed test sites should give you a decent idea of what to expect. Also check the device you're using for a built-in connection test: most "smart" TVs and other dedicated streaming devices include built-in Internet speed tests. These tests, usually located in the Network or Wireless menu areas, are going to be the most accurate way to figure out how much bandwidth is available for their apps.
Here's some more specific Internet speed testing and troubleshooting advice for some of the more popular streaming services:

Netflix: Check out the Netflix ISP Speed Index report to see what to expect speed-wise, on average, from the various ISPs around the world, or use Fast.com to test yours right now. Netflix's Internet Connection Speed Recommendations page suggests 5 Mbps for HD (1080p) streaming and 25 Mbps for 4K (2160p) streaming. If you're having trouble, it's possible to set the bandwidth Netflix uses in your account settings.

Apple TV: While there's no built-in Internet speed test available on Apple TV devices, Apple does offer extensive Apple TV Playback Performance Troubleshooting via their help pages. Apple recommends 8 Mbps for 1080p content and 2.5 Mbps for standard definition content.

Hulu: In my opinion, the award for the best video streaming troubleshooting page goes to Hulu. Their Streaming Issues with Hulu on your TV page offers device-specific troubleshooting, making solving slow Hulu connections really easy. Hulu suggests 3 Mbps for HD streaming and 1.5 Mbps for SD.

Amazon Instant Video: See the Video Issues page on Amazon's site for help that's specific to your device, like your computer, Amazon-branded tablets and devices, and other streaming hardware. Amazon recommends at least 3.5 Mbps for problem-free HD streaming and 900 Kbps for SD.

HBO GO: The HBO GO Device Troubleshooting page should help you clear up any major problems. HBO suggests you test your Internet speed with a third-party speed test to make sure you're getting the minimum download bandwidth of 3 Mbps they recommend for a buffer-free streaming experience.
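To put those numbers side by side, here's a minimal sketch that compares a measured download speed against each service's suggested minimum. The thresholds are the vendor figures quoted above, current as of this writing and subject to change, and the function name is just for illustration.

    # A minimal sketch comparing a measured download speed (in Mbps) against
    # the streaming minimums quoted above. The figures are the vendors' own
    # recommendations as cited in this post and may change over time.
    STREAMING_MINIMUMS_MBPS = {
        "Netflix HD (1080p)": 5.0,
        "Netflix 4K (2160p)": 25.0,
        "Apple TV 1080p": 8.0,
        "Apple TV SD": 2.5,
        "Hulu HD": 3.0,
        "Hulu SD": 1.5,
        "Amazon Instant Video HD": 3.5,
        "Amazon Instant Video SD": 0.9,
        "HBO GO": 3.0,
    }

    def check_streaming(measured_mbps):
        # Print whether the measured bandwidth meets each service's minimum.
        for service, minimum in STREAMING_MINIMUMS_MBPS.items():
            verdict = "should stream fine" if measured_mbps >= minimum else "expect buffering"
            print(f"{service}: needs {minimum} Mbps -> {verdict}")

    check_streaming(7.85)   # example: a measured download speed of 7.85 Mbps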

    Blog Entry, Cloud Apps, Internet
  • Posted on June 25, 2016 10:00 am
    Joseph Forbes
    No comments

The term bandwidth has a number of technical meanings, but since the popularization of the internet it has generally referred to the volume of information per unit of time that a transmission medium (like an internet connection) can handle. An internet connection with a larger bandwidth can move a set amount of data (say, a video file) much faster than an internet connection with a lower bandwidth. Bandwidth is typically expressed in bits per second, like 60 Mbps or 60 Mb/s, to describe a data transfer rate of 60 million bits (megabits) every second.

How Much Bandwidth Do You Have? (& How Much Do You Need?)

See How to Test Your Internet Speed for help on how to accurately determine how much bandwidth you have available to you. Internet speed test sites (like SpeedTest.net) are often, but not always, the best way to do that.

How much bandwidth you need depends on what you plan on doing with your internet connection. For the most part, more is better, constrained of course by your budget. In general, if you plan on doing nothing but Facebook and the occasional video watching, a low-end high-speed plan is probably just fine. If you have a few TVs that will be streaming Netflix, and more than a few computers and devices that might be doing who-knows-what, I'd go with as much as you can afford. You won't be sorry.

Bandwidth: A Lot Like Plumbing

Plumbing provides a great analogy for bandwidth... seriously! Data is to available bandwidth as water is to the size of the pipe. In other words, as the bandwidth increases, so does the amount of data that can flow through in a given amount of time, just like as the diameter of the pipe increases, so does the amount of water that can flow through during a period of time.

Say you're streaming a movie, someone else is playing an online multiplayer video game, and a couple of others on your same network are downloading files or using their phones to watch online videos. It's likely that everyone will feel that things are a bit sluggish, if not constantly starting and stopping. This has to do with bandwidth. To return to the plumbing analogy, assuming the water pipe to a home (the bandwidth) remains the same size, as the home's faucets and showers are turned on (data downloads to the devices being used), the water pressure at each point (the perceived "speed" at each device) will drop - again, because there's only so much water (bandwidth) available to the home (your network). Put another way: the bandwidth is a fixed amount based on what you pay for, so while one person may be able to stream a high-def video without any lag whatsoever, the moment you begin adding other download requests to the network, each one gets just a portion of the full capacity.

For example, if a speed test identifies my download speed as 7.85 Mbps, it means that, given no interruptions or other bandwidth-hogging applications, I could download a 7.85 megabit (or 0.98 megabyte) file in one second. A little math tells you that at this bandwidth I could download about 59 MB of information in one minute, or about 3,528 MB in an hour, which is equivalent to a 3.5 GB file... pretty close to a full-length, DVD-quality movie.
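A few lines of Python make this arithmetic concrete, including the shared-connection case covered next; the 7.85 Mbps figure and 3.5 GB file size are just the examples from this post:

    # A worked version of the arithmetic above: how long a download takes at a
    # given bandwidth, and what happens when the connection is shared.
    BITS_PER_BYTE = 8

    def download_time_hours(file_size_gb, bandwidth_mbps, simultaneous_downloads=1):
        file_size_megabits = file_size_gb * 1000 * BITS_PER_BYTE  # GB -> megabits (decimal units)
        effective_mbps = bandwidth_mbps / simultaneous_downloads  # the connection is shared
        return file_size_megabits / effective_mbps / 3600         # seconds -> hours

    print(round(download_time_hours(3.5, 7.85), 2))     # one 3.5 GB movie alone: ~0.99 hours
    print(round(download_time_hours(3.5, 7.85, 2), 2))  # two downloads at once: ~1.98 hours each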
So while I could theoretically download a 3.5 GB video file in an hour, if someone else on my network tries to download a similar file at the same time, it would now take two hours to complete the download. Again, the network only permits so much data to be downloaded at any given time, so it now must allow the other download to use some of that bandwidth, too. Technically, the network would now see 3.5 GB + 3.5 GB, for 7 GB of total data that needs to be downloaded. The bandwidth capacity doesn't change, because that's the level you pay your ISP for, so the same concept applies: a 7.85 Mbps connection takes two hours to download 7 GB of data just like it takes one hour to download half that amount.

The Difference Between Mbps and MBps

It's important to understand that bandwidth can be expressed in any unit (bytes, kilobytes, megabytes, gigabits, etc.). Your ISP might use one term, a testing service another, and a video streaming service yet another. You'll need to understand how these terms are related, and how to convert between them, if you want to avoid paying for too much internet service or, maybe worse, ordering too little for what you want to do with it.

For example, 15 MBps is not the same as 15 Mbps (note the lowercase b). The first reads as 15 megaBYTES per second while the second is 15 megaBITS per second. These two values differ by a factor of 8, since there are 8 bits in a byte. If both readings were written in megabytes per second (MBps), they'd be 15 MBps and 1.875 MBps (since 15/8 is 1.875). Written in megabits per second (Mbps), the first would be 120 Mbps (15 x 8 is 120) and the second 15 Mbps.

Tip: This same concept applies to any data unit you might encounter. You can use an online conversion calculator if you'd rather not do the math manually, or see the short conversion sketch at the end of this post.

More Information on Bandwidth

Some software lets you limit the amount of bandwidth that the program is allowed to use, which is really helpful if you still want the program to function but don't need it running at full speed. This intentional bandwidth limitation is often called bandwidth control. Some download managers, like Free Download Manager, support bandwidth control, as do numerous online backup services and some cloud storage services. These are all services and programs that tend to use massive amounts of bandwidth, so having options that limit their access to it makes sense.

Something similar to bandwidth control is bandwidth throttling. This is also a deliberate bandwidth limit, but one that's sometimes set by internet service providers to either restrict certain types of traffic (like Netflix streaming or file sharing) or to restrict all traffic during particular periods of the day in order to reduce congestion.

Network performance is determined by more than just how much bandwidth you have available. Factors like latency, jitter, and packet loss can also contribute to less-than-desirable performance in any given network.

Also check out the original "Warriorsofthe.net" website; the "Warriors of the Net" video on YouTube gives a general overview of how networks work.
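As promised above, here's a minimal conversion sketch between megabits per second and megabytes per second; the function names are just illustrative:

    # A minimal sketch of the megabit/megabyte conversions described above.
    BITS_PER_BYTE = 8

    def mbps_to_MBps(mbps):
        # Megabits per second -> megabytes per second.
        return mbps / BITS_PER_BYTE

    def MBps_to_mbps(mbytes_per_sec):
        # Megabytes per second -> megabits per second.
        return mbytes_per_sec * BITS_PER_BYTE

    print(MBps_to_mbps(15))   # 15 MBps -> 120 Mbps
    print(mbps_to_MBps(15))   # 15 Mbps -> 1.875 MBps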

    Blog Entry, EDUCATION, Internet