Saturday 31 May 2014

Link Aggregation - IEEE 802.1AX-2008 (formerly IEEE 802.3ad) & MC-LAG

Link aggregation is a computer networking term that describes various methods of combining (aggregating) multiple network connections in parallel to increase throughput beyond what a single connection could sustain, and to provide redundancy in case one of the links fails.
Further umbrella terms used to describe the method include port trunking, link bundling, Ethernet/network/NIC bonding, and NIC teaming. These umbrella terms encompass not only vendor-independent standards such as the Link Aggregation Control Protocol (LACP) for Ethernet, defined in IEEE 802.1AX and previously in IEEE 802.3ad, but also various proprietary solutions.

Initial release 802.3ad in 2000

As of 2000, most gigabit channel-bonding schemes use the IEEE Link Aggregation standard, which was added as clause 43 of the IEEE 802.3 standard in March 2000 by the IEEE 802.3ad task force.[4] Nearly every network equipment manufacturer quickly adopted this joint standard over their proprietary schemes.

Move to 802.1 layer in 2008

David Law noted in 2006 that certain 802.1 layers (such as 802.1X security) were positioned in the protocol stack above Link Aggregation which was defined as an 802.3 sublayer. This discrepancy was resolved with formal transfer of the protocol to the 802.1 group with the publication of IEEE 802.1AX-2008 on 3 November 2008.
Reference Diagram - Link Aggregation between Server and Switch 

Types of Link Aggregation:

1. Static Link Aggregation
With static link aggregation, all configuration settings must be set up manually on both devices participating in the LAG.
2.  Dynamic Link Aggregation: Link Aggregation Control Protocol (LACP)
In addition, the Link Aggregation Control Protocol (LACP) allows the two members of the aggregation to exchange information about the link aggregation. This information is packetized in Link Aggregation Control Protocol Data Units (LACPDUs).
Each individual port can be configured for active or passive LACP:
·         Passive LACP: the port prefers not to transmit LACPDUs. The port only transmits LACPDUs when its counterpart uses active LACP (a preference not to speak unless spoken to).
·         Active LACP: the port prefers to transmit LACPDUs and thereby speak the protocol, regardless of whether its counterpart uses passive LACP (a preference to speak regardless).
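
As a rough illustration (a sketch, not text from the standard), the active/passive rule can be expressed in a couple of lines; note that two passive ports never exchange LACPDUs, so no dynamic LAG forms between them:

    # Sketch of the LACP active/passive rule: a port sends LACPDUs if it is
    # configured active, or if its partner is active.
    def sends_lacpdus(local_mode, partner_mode):
        return local_mode == "active" or partner_mode == "active"

    assert sends_lacpdus("passive", "active") is True
    assert sends_lacpdus("passive", "passive") is False   # no LAG is negotiated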

In contrast to a static link aggregation, LACP provides the following advantages:

·         Even if one physical link fails, LACP will detect the failure even when the point-to-point connection runs through a media converter, a case in which the link status at the switch port remains up despite the failure. Because LACPDUs are no longer received over that connection, the link is removed from the link aggregate. This ensures that packets are not lost on the failed link.
·         Both devices can mutually confirm the LAG configuration. With static link aggregation, errors in the configuration or wiring are often not detected as quickly.

MC-LAG

MC-LAG (Multi-Chassis Link Aggregation Group) is a type of LAG whose constituent ports terminate on separate chassis, adding node-level redundancy to the link-level redundancy that a normal LAG provides. Two or more nodes share a common LAG endpoint and present a single logical LAG to the remote end. Note that MC-LAG is vendor-specific; it is not covered by the IEEE 802.1AX-2008 standard, and its implementation varies by vendor. Nodes in an MC-LAG cluster communicate to synchronize state and negotiate automatic switchovers (failover); some implementations also support administrator-initiated (manual) switchovers.
IEEE 802.1AX Link Aggregation solves link redundancy using multipathing at Layer 2 and flow-based load balancing, but the protocol constrains the aggregation to a single node-to-node topology. Organizations require a Layer 2 multipath solution that can provide dynamic flow-based load balancing to multiple network nodes. MC-LAG is designed to address these requirements for today’s resilient and high-performance networks.
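
As a rough sketch of the flow-based load balancing mentioned above (the hash function and field choice here are illustrative assumptions, not any particular vendor's algorithm), a member link can be chosen by hashing a flow's 5-tuple so that all packets of one flow stay on the same link and are never reordered:

    # Illustrative flow-hash link selection for a LAG (assumed 5-tuple hash).
    import zlib

    def pick_member(flow, members):
        """flow: (src_ip, dst_ip, protocol, src_port, dst_port)"""
        h = zlib.crc32("|".join(map(str, flow)).encode())
        return members[h % len(members)]   # same flow always maps to the same link

    links = ["eth0", "eth1", "eth2", "eth3"]
    print(pick_member(("10.0.0.1", "10.0.0.2", "tcp", 49152, 443), links))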


Thursday 29 May 2014

Routing Protocols and Types of Routing

Routing Protocols
Routing protocols were created for routers. They are designed to allow routers to exchange routing tables, that is, the networks they know about. There are many different routing protocols, each designed for a specific network size and purpose.

Classifying Routing Protocols 

Routing protocols can be classified into different groups according to their characteristics. Specifically, routing protocols can be classified by their:
  • Purpose: Interior Gateway Protocol (IGP) or Exterior Gateway Protocol (EGP)
  • Operation: Distance vector protocol, link-state protocol, or path-vector protocol
  • Behavior: Classful (legacy) or classless protocol
For example, IPv4 routing protocols are classified as follows:
  • RIPv1 (legacy): IGP, distance vector, classful protocol
  • IGRP (legacy): IGP, distance vector, classful protocol developed by Cisco (deprecated in Cisco IOS 12.2 and later)
  • RIPv2: IGP, distance vector, classless protocol
  • EIGRP: IGP, distance vector, classless protocol developed by Cisco
  • OSPF: IGP, link-state, classless protocol
  • IS-IS: IGP, link-state, classless protocol
  • BGP: EGP, path-vector, classless protocol
The classful routing protocols, RIPv1 and IGRP, are legacy protocols and are only used in older networks. These routing protocols have evolved into the classless routing protocols, RIPv2 and EIGRP, respectively. Link-state routing protocols are classless by nature.
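
For illustration only, the same classification can be written as a small lookup table:

    # The routing-protocol classification above as a Python dictionary:
    # protocol -> (purpose, operation, behavior).
    ROUTING_PROTOCOLS = {
        "RIPv1": ("IGP", "distance vector", "classful"),
        "IGRP":  ("IGP", "distance vector", "classful"),
        "RIPv2": ("IGP", "distance vector", "classless"),
        "EIGRP": ("IGP", "distance vector", "classless"),
        "OSPF":  ("IGP", "link-state", "classless"),
        "IS-IS": ("IGP", "link-state", "classless"),
        "BGP":   ("EGP", "path-vector", "classless"),
    }

    print(ROUTING_PROTOCOLS["OSPF"])   # ('IGP', 'link-state', 'classless')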

Figure 3-9

Wednesday 28 May 2014

What is Spanning Tree Protocol

The Spanning Tree Protocol (STP) is a network protocol that ensures a loop-free topology for any bridged Ethernet local area network.

The basic function of STP is to prevent bridge loops and the broadcast radiation that results from them. Spanning tree also allows a network design to include spare (redundant) links to provide automatic backup paths if an active link fails, without the danger of bridge loops, or the need for manual enabling/disabling of these backup links.

Spanning Tree Protocol (STP) is standardized as IEEE 802.1D.
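
As a minimal sketch (illustrative values, not a full BPDU exchange), the spanning tree root bridge is simply the switch with the numerically lowest bridge ID: the configured priority is compared first, and the MAC address breaks ties.

    # Root bridge election: lowest (priority, MAC) wins.
    bridges = [
        {"priority": 32768, "mac": "00:1a:2b:3c:4d:5e"},
        {"priority": 32768, "mac": "00:1a:2b:3c:4d:01"},
        {"priority": 4096,  "mac": "00:aa:bb:cc:dd:ee"},
    ]

    root = min(bridges, key=lambda b: (b["priority"], b["mac"]))
    print(root)   # the bridge with priority 4096 becomes the root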

Tuesday 27 May 2014

What is VLAN

“A virtual LAN (VLAN) is a group of networking devices in the same broadcast domain, logically”

The purpose of VLANs
The basic reason for splitting a network into VLANs is to reduce congestion on a large LAN. To understand this problem, we need to look briefly at how LANs have developed over the years. Initially LANs were very flat—all the workstations were connected to a single piece of coaxial cable, or to sets of chained hubs. In a flat LAN, every packet that any device puts onto the wire gets sent to every other device on the LAN. As the number of workstations on the typical LAN grew, they started to become hopelessly congested; there were just too many collisions, because most of the time when a workstation tried to send a packet, it would find that the wire was already occupied by a packet sent by some other device.

 VLANs address issues such as scalability, security, and network management.

Without a router or L3 switch, the computers within each VLAN can communicate with each other but not with computers in another VLAN. For example, to transfer a file between VLANs we need a router or L3 switch. This is called “inter-VLAN routing”.


To allow inter-VLAN routing, you need to configure trunk ports on the links between the L3 switch and each L2 switch (all VLANs are tagged).

End systems and other networking devices are connected to access ports of the switch (the respective VLAN is untagged).
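
For illustration, the sketch below (a hypothetical helper, not a complete frame builder) shows what a trunk port carries: a 4-byte 802.1Q tag, made up of the TPID 0x8100 and a 16-bit TCI holding the 12-bit VLAN ID, inserted after the source MAC address. An access port would strip this tag before delivering the frame to the end system.

    # Build an Ethernet frame carrying an 802.1Q VLAN tag (illustrative only).
    import struct

    def tag_frame(dst_mac, src_mac, vlan_id, ethertype, payload, priority=0):
        tci = (priority << 13) | (vlan_id & 0x0FFF)    # PCP (3 bits) + DEI + VID (12 bits)
        tag = struct.pack("!HH", 0x8100, tci)          # TPID 0x8100 marks a tagged frame
        return dst_mac + src_mac + tag + struct.pack("!H", ethertype) + payload

    frame = tag_frame(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55",
                      vlan_id=10, ethertype=0x0800, payload=b"...")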

Reference diagram as below:


Monday 26 May 2014

Wireless Security from Attacks

Personal networks often rely on WPA2-PSK to secure their traffic.

But enterprise network managers too often assume that strong encryption equals strong security for wireless LAN (WLAN) traffic.

Certainly, WPA2 Enterprise offers better authentication and encryption options than many organizations deploy in their wired networks. But WLANs involve many other potential vulnerabilities: rogue access points (APs), denial-of-service attacks against clients, and targeted attacks against WLAN infrastructure can all lead to leakage of sensitive data. The threat to enterprise WLANs is real and growing.

Some of the features required in the real world to protect against these attacks are:

1. Rogue AP Detection & Rogue Containment
   A rogue access point is an AP not sanctioned or authorized by network administrators. Typically, rogue APs are connected to the network by well-intentioned employees who are unaware of the security risks they create. Enhanced security monitoring enables faster response to these security breaches.
Once a rogue AP has been detected and classified, dedicated air monitors provide an effective way to contain it without negatively impacting the performance of the wireless network.

2. Protecting Wireless Client
   Valid Client on Unencrypted SSID
   Valid Client on Rogue AP With Valid SSID
   Penetration Attacks Against Valid Clients
     Disconnect Station Attacks
     Client Flooding Attacks
     Block ACK Attacks

3. Protecting Wireless Infrastructure
   Deauthentication and Disassociation Broadcast Attacks
   Frame Rate Anomaly Attacks
   Malformed Frame Attacks

Dedicated air monitors (AMs) or sensors, and a wireless controller combined with a wired NMS, provide a number of security-related enhancements over standalone solutions.




What is RAID



RAID (originally redundant array of inexpensive disks; now commonly redundant array of independent disks) is a data storage virtualization technology that combines multiple disk drive components into a logical unit for the purposes of data redundancy and performance improvement.
Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required. The different schemes or architectures are named by the word RAID followed by a number (e.g. RAID 0, RAID 1)

Before we start talking about the different RAID types, I'm going to define some basic concepts first.


Fault tolerance defined: Basic fault tolerance in the world of storage means your data is intact even if one or more hard drives fail. Some of the more expensive RAID types permit multiple hard drive failures without loss of data. There are also more advanced forms of fault tolerance in the enterprise storage world, called path redundancy (AKA multi-path), which allow different storage controllers and the connectors that attach hard drives to fail without loss of service. Path redundancy isn't considered a RAID technology, but it is a form of storage fault tolerance.
Storage performance defined: There are two basic metrics of performance in the world of storage. They are I/O performance and throughput. In general, read performance is more valued than write performance because storage devices spend the majority of their time reading data. I/O (Input/Output) performance is the measure of how many small random read/write requests can be processed in a single second and it is very important in the server world, especially database type applications. IOPS (I/O per second) is the common unit of measurement for I/O performance.
Throughput is the measurement of how much data can be read or written in a single second and it is important in certain server applications and very desirable for home use. Throughput is typically measured in MB/sec (megabytes transferred per second) though mbps (megabits per second) is sometimes also used to describe storage communication speeds. There is sometimes confusion between megabits versus megabytes since they sound alike. For example, 100 megabit FastEthernet might sound faster than a typical hard drive that gets 70 MB/sec but this would be like thinking that 100 ounces weighs more than 70 pounds. In reality, the hard drive is much faster because 70 MB/sec is equivalent to 560 mbps.
RAID techniques defined: There are three fundamental RAID techniques and the various RAID types can use one or more of these techniques. The three fundamental techniques are:
  • Mirroring
  • Striping
  • Striping with parity
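
As an illustration of the third technique, the sketch below (a simplified RAID 5 style calculation, assuming equal-sized blocks) shows how a parity block computed with XOR allows any single lost block to be rebuilt from the remaining ones:

    # XOR parity over a stripe of data blocks.
    def parity(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]        # stripe across three data disks
    p = parity(data)                          # stored on the parity disk
    rebuilt = parity([data[0], data[2], p])   # disk 1 failed: rebuild its block
    assert rebuilt == data[1]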

Thursday 22 May 2014

What is iSCSI

Internet SCSI 

Internet Protocol (IP) is the most widely used standard anywhere. The technology is well understood, easy to implement, and affordable. Most corporate data traffic uses a common IP network…except for storage data. Access to high-performance storage data traditionally requires direct-attached devices or a Fibre Channel (FC) storage network. Internet SCSI (iSCSI) transports traditional high-performance “block-based” storage data over a common IP network, which means it can be used in remote mirroring, remote backup, and similar applications, since an IP network has no distance limitations.

As iSCSI begins to achieve widespread market adoption, barriers to implementing and managing networked storage can be removed by incorporating IP networking into a storage network.

SCSI 

Most servers access storage devices through the Small Computer Systems Interface (SCSI) standard, moving “blocks” of data among computer systems. But its limitations became clear as demand for storage capacity grew.
SCSI’s built-in limitations on distance, number of devices supported and exclusive ownership of a server to its respective SCSI storage device prohibited the creation and sharing of a common pool of storage.

Wednesday 21 May 2014

What is the IP rating system?

Not to be confused with the more commonly known Internet Protocol address, IP in this case stands for Ingress Protection, or International Protection, a rating system that defines how well an enclosure protects its internal electrical equipment (such as the internal hardware of a camera or access point) against environmental factors such as dust and rain. The IP rating system was developed by the International Electrotechnical Commission and is defined in the IEC 60529 standard. The rating consists of numerical values, with each digit defining a different aspect and level of protection afforded by the enclosure. When there is no rating with regard to one of the criteria, the digit is replaced with an X.

The first digit

The first digit refers to protection against solid objects, ranging from 1 (protection against accidental touches by hand) to 6 (complete protection against dust). Solid objects can refer to anything, including fingers, tools, wires, small wires, and dust. Check out the table below for the detailed explanation of each number.
Rating | Protection
0 | No special protection
1 | Protection against solid objects greater than 50 mm in diameter, such as a hand
2 | Protection against solid objects greater than 12.5 mm in diameter, such as fingers
3 | Protection against solid objects greater than 2.5 mm in diameter, such as screwdrivers and other tools
4 | Protection against solid objects greater than 1 mm in diameter, such as wires
5 | Limited protection against dust, that is, no harmful deposit
6 | Complete protection against dust

The second digit

The second digit refers to protection against water intrusion, and ranges from 1 (protection against condensation) to 8 (immersion below 1m and under pressure). Once again, check out the table below for each rating number and the level of protection provided.
Rating | Protection
0 | No protection
1 | Protection against vertically falling drops and condensation
2 | Protection against direct sprays of water, up to 15 degrees from vertical
3 | Protection against direct sprays of water, up to 60 degrees from vertical
4 | Protection against direct sprays of water from all directions; limited ingress permitted
5 | Protection against low-pressure jets of water from all directions; limited ingress permitted
6 | Protection against strong jets of water from all directions; limited ingress permitted
7 | Protection against temporary immersion between 15 cm and 1 m for 30 minutes
8 | Protection against long periods of immersion beyond 1 m and under pressure
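
For illustration, a small parser for codes such as IP67 or IPX5 might look like the sketch below (based on the two tables above; an X means no rating was assigned for that criterion):

    # Decode an IP (Ingress Protection) code into its two digits.
    import re

    def parse_ip_code(code):
        m = re.fullmatch(r"IP([0-6X])([0-8X])", code.upper())
        if not m:
            raise ValueError("not a valid IP code: " + code)
        solids, liquids = m.groups()
        return {"solids": None if solids == "X" else int(solids),
                "liquids": None if liquids == "X" else int(liquids)}

    print(parse_ip_code("IP67"))   # {'solids': 6, 'liquids': 7}
    print(parse_ip_code("IPX5"))   # {'solids': None, 'liquids': 5}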

Like the International Electrotechnical Commission, the National Electrical Manufacturers Association (NEMA), the association of electrical equipment and medical imaging manufacturers in the United States, defines standards for various grades of electrical enclosures typically used in industrial applications.
 A full list of NEMA enclosure types is available from the NEMA website http://www.nema.org

For a YouTube video, see https://www.youtube.com/watch?v=bi3Rd6CYE6s



Tuesday 20 May 2014

WLAN ARCHITECTURES


There are two major approaches today for deploying WLAN Networks in the enterprise.
The two approaches have some basic philosophical differences which can have a major impact on deployment costs, security and manageability.
The first architecture to be presented is the so-called “Centralized” WLAN architecture. The centralized architecture requires one or more servers or special-purpose switches (mobility controllers) to be deployed in conjunction with wireless access points. By default, in the centralized approach all wireless traffic is sent through the WLAN switch. In either case, the centralized approach is considered to be an “overlay” architecture; that is, it rides on top of the existing Ethernet network.
The other approach is the “Distributed” WLAN architecture. APs have built-in WLAN security, Layer 2 bridging, and access control features. Depending on the number of APs required, centralized management may still be needed; distributed-AP vendors may provide centralized management tools, or one of the APs may act as a virtual mobility controller.
1.       Data Forwarding:
In the “Distributed” WLAN architecture, the wireless traffic load is literally distributed across the APs and does not depend on a centralized element to process all of the wireless traffic.
A “Centralized” WLAN architecture offers more choices, and thus more flexibility, than a “Distributed” WLAN architecture. With a controller, organizations can choose to forward traffic locally at the APs (similar to the method used in a “Distributed” WLAN architecture),
or they can choose to tunnel certain types of traffic back to the controller for security reasons. With a “Centralized” WLAN architecture, organizations have the flexibility to mix and match these approaches as appropriate.

2.       Deployment Scenario:
 In a “Distributed” WLAN architecture, you need to reconfigure your access layer with the addition of each new AP. Since all virtual LANs (VLANs) needed by a new AP must be configured on the switch port it connects to, your network administrator has to reconfigure the wiring-closet switches for every new AP. For example, you may have a VLAN for guest access, a VLAN for corporate access, and a VLAN for special access (such as VoIP). All these VLANs must be configured each time you add a new AP.
With a “Centralized” WLAN architecture, it is far easier to add APs when traffic is tunneled to the controller. The access layer is configured once at the handoff to the controller, and the system manages the rest. The centralized controller provides rich functionality for automating deployment, eliminating the need for frequent, error-prone changes to the access layer. You simply plug in the AP and it automatically self-configures.
If some APs in a “Centralized” WLAN architecture must still be configured for bridge mode (forwarding traffic locally), all required VLANs must be configured on the switch ports those APs connect to.

Reference Diagram as Below:



                                     Controller Based with AP in Bridge Environment

Monday 19 May 2014

TTLS, TLS and PEAP Comparison



Broadly speaking, the history of 802.11 security is an attempt to address two major problems. The first problem is that the protocols used to authenticate network users were not strong, so unauthorized users could easily access network resources. Second, the Wired Equivalent Privacy (WEP) system proved insufficient for a number of well-publicized reasons. In response to user concerns about weak security, the industry began developing a series of stronger protocols for use with wireless LANs. The key standard is IEEE 802.1X, which provides both stronger authentication and a mechanism for deriving and distributing stronger keys to bolster WPA/WPA2.

Authentication Protocol Requirements

The dual requirement of strong encryption to prevent eavesdropping and mutual authentication to ensure that sensitive information is transmitted only over legitimate networks must drive your wireless authentication strategy.

Exchanging user authentication credentials over a wireless network must be done with great care because traffic interception is much easier. On a wired network, attackers require physical access to the network medium to intercept transmissions, but radio waves cannot easily be confined to a physical facility. Without the security of a direct physical connection, cryptographic safeguards must be built into the protocols for two reasons.

 First, and most obvious, is to prevent attackers from recovering user credentials as they travel over the radio link. Secondly, unauthorized "rogue" access points may be set up in an attempt to collect credentials from unsuspecting users. Cryptography can provide the necessary assurance that users are connecting to an authorized and secured network.

802.1X is based on the Extensible Authentication Protocol (EAP), and so it offers the choice of several methods to protect authentication exchanges. In practice, authentication methods based on the IETF's well-known Transport Layer Security (TLS) standard can satisfy strict encryption and authentication requirements. Three TLS-based protocols have been developed for use with EAP and are suitable for deployments with wireless LANs: EAP-Transport Layer Security (EAP-TLS), Tunneled Transport Layer Security (TTLS), and Protected EAP (PEAP).

EAP-TLS

EAP-TLS uses the TLS handshake as the basis for authentication. TLS itself has many attributes that make it attractive for security-related use. It is well documented and has been analyzed extensively, and cryptanalysis of the protocol has not yet revealed significant weaknesses. TLS performs authentication by exchanging digital certificates. The server presents a certificate to the client. After validating the server's certificate, the client presents a client certificate. Naturally, the certificate should be protected on the client by a passphrase or PIN, or stored on a smart card, depending on the implementation.
The central role of certificates is the Achilles heel of EAP-TLS. If no PKI exists, it must be deployed before EAP-TLS can be used in a network. Certificate management is a time-consuming and cumbersome administrative task, especially because certificates must be revoked as users lose access to the wireless network. In addition to issuing certificates, on-line validity checks are mandatory. Furthermore, an existing PKI may be insufficient because most EAP-TLS implementations require the presence of certain attributes that were not defined when early PKI systems were rolled out. A final risk is that EAP-TLS by itself protects the user's authentication material, but not the user identity. The bottom line is that EAP-TLS is secure, but the requirement for client certificates is a large hurdle that makes TTLS and PEAP attractive.

TLS Tunneling with TTLS and PEAP 

Both TTLS and PEAP use the inherent privacy of the TLS tunnel to safely extend older authentication methods, such as username/password or token card authentication, to the wireless network. Both are two-stage protocols that establish a strongly encrypted "outer" TLS tunnel in stage one and then exchange authentication credentials through an "inner" method in stage two. Both TTLS- and PEAP-capable RADIUS servers can be used with existing authentication systems. RADIUS proxy abilities can extend existing databases, directories, or one-time password systems for use with wireless LANs.
TTLS uses the TLS channel to exchange "attribute-value pairs" (AVPs), much like RADIUS. The flexibility of the AVP mechanism allows TTLS servers to validate user credentials against nearly any type of authentication mechanism. TTLS implementations today support all methods defined by EAP, as well as several older methods (CHAP, PAP, MS-CHAP, and MS-CHAPv2). PEAP uses the TLS channel to protect a second EAP exchange, called the "inner" EAP exchange. Most supplicants support EAP-MS-CHAPv2 for the inner exchange, which allows PEAP to use external user databases. Other common EAP methods supported by PEAP supplicants are EAP-TLS and generic token card (EAP-GTC).
PEAP's major advantage is support from Microsoft, and therefore built-in support from the operating system. PEAP support is a standard feature in Windows XP and available as a Microsoft feature pack for Windows 2000. Microsoft supplicants (wireless clients) are tightly integrated with the base operating system and can therefore provide single sign-on capabilities by using the same user credentials for both Windows sign-on and wireless LAN authentication. Microsoft supplicants, however, do not support the use of token cards. Cisco PEAP supplicants do support EAP-GTC, but Cisco and Microsoft have implemented PEAP in different ways that are not compatible.
Recommendations
Secure wireless LAN deployments require PKI to be deployed in a supporting role. Certificates are used to establish a secure authentication channel in any case. One of the first decisions to be made is whether the cost of issuing client certificates is one worth accepting. In many cases, an existing PKI can be used to support a wireless LAN deployment. Organizations which have not already deployed PKI should consider TTLS or PEAP instead, with an appropriate inner authentication method.

Difference between IPv4 and IPv6

What is Internet Protocol?
Internet Protocol is a set of technical rules that defines how computers communicate over a network. There are currently two versions: IP version 4 (IPv4) and IP version 6 (IPv6).

What is IPv4?
IPv4 was the first version of Internet Protocol to be widely used, and accounts for most of today’s Internet traffic. There are just over 4 billion IPv4 addresses. While that is a lot of IP addresses, it is not enough to last forever. 

What is IPv6?
IPv6 is a newer numbering system that provides a much larger address pool than IPv4. It was deployed in 1999 and should meet the world’s IP addressing needs well into the future.

What is the major difference?
The major difference between IPv4 and IPv6 is the number of IP addresses. There are 4,294,967,296 IPv4 addresses. In contrast, there are 340,282,366,920,938,463,463,374,607,431,768,211,456 IPv6 addresses.
The technical functioning of the Internet remains the same with both versions and it is likely that both versions will continue to operate simultaneously on networks well into the future. To date, most networks that use IPv6 support both IPv4 and IPv6 addresses in their networks.
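
These totals follow directly from the address lengths (32 bits versus 128 bits), as a quick check shows:

    ipv4 = 2 ** 32     # 32-bit addresses
    ipv6 = 2 ** 128    # 128-bit addresses
    print(f"{ipv4:,}")   # 4,294,967,296
    print(f"{ipv6:,}")   # 340,282,366,920,938,463,463,374,607,431,768,211,456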



Sunday 18 May 2014

A Comparison of OSI Model vs. TCP/IP Model



There are seven layers in the OSI Model, only four in the TCP/IP model. This is because TCP/IP assumes that applications will take care of everything beyond the Transport layer. The TCP/IP model also squashes the OSI's Physical and Data Link layers together into the Network Access Layer. 


OSI Model - Foundation for all communications that take place between computers and other networking devices.

TCP/IP is a suite of protocols that work together to provide communication between network devices.

Note: The OSI Model is just that, a model; it is not a protocol that can be installed or run on any system.
TCP/IP, on the other hand, is a functioning protocol suite that enables computers to communicate.

Friday 16 May 2014

Understanding Wireless Authentication and Encryption

A strong understanding of authentication and encryption is essential to deploy a secure and functional WLAN. Evaluate the different options against the goals of the organization and the security and operational requirements that the organization operates under. The number of different authentication and encryption options that must be supported also influences the design of the WLAN and the number of SSIDs that must be broadcast.
In general, each new authentication type or encryption mode that is required means that an additional SSID must be deployed. To preserve radio resources, organizations should consider the types of devices to be deployed and attempt to limit the number of SSIDs. Remember that each SSID that is deployed appears as an individual AP, and it must beacon, which uses up valuable airtime.
Wi-Fi networks have multiple authentication methods available for use. Each method depends on the network goals, security requirements, user types, and client types that will access the network. Consider the types of data that will flow over the network, as that will narrow the authentication and encryption choices.
Layer 2 authentication occurs before the client can complete a connection to the network and pass traffic. As the name suggests, the client does not have an IP address at this stage.
Open authentication really means no authentication. The network is available for anyone to join and no keys are required. This form of authentication is often combined with a Layer 3 authentication method that is used after connection to the network.
Wired equivalent privacy (WEP) is the original security mechanism that was built into the 802.11 standard, and several variations are available. The most common version is static WEP where all stations share a single key for authentication and encryption. Other versions of WEP have different key lengths and dynamic key assignments.
As an authentication and encryption protocol, WEP was fully compromised in 2001. Automated tools make it easy to access a WEP network with no expertise or training. WEP is considered no more secure than an open network. Aruba recommends that all organizations discontinue the use of WEP and replace any older WEP-only devices with more capable systems as soon as is practical.
MAC authentication is an early form of filtering. MAC authentication requires that the MAC address of a machine must match a manually defined list of addresses. This form of authentication does not scale past a handful of devices, because it is difficult to maintain the list of MAC addresses. Additionally, it is easy to change the MAC address of a station to match one on the accepted list. This spoofing is trivial to perform with built-in driver tools, and it should not be relied upon to provide security.
MAC authentication can be used alone, but typically it is combined with other forms of authentication, such as WEP authentication. Because MAC addresses are easily observed during transmission and easily changed on the client, this form of authentication should be considered nothing more than a minor hurdle that will not deter a determined intruder. Aruba recommends against the use of MAC-based authentication.
Pre-shared key (PSK) authentication is the most common form of authentication for consumer Wi-Fi routers. Like WEP, the key is used for both authentication and encryption. In enterprise deployments, PSK is often limited to devices that cannot perform stronger authentication. All devices share the same network key, which must be kept secret. This form of authentication is easy to configure for a small number of devices. However, when more than a few devices must use the key, key management quickly becomes difficult.
The key usually must be changed manually on devices, which poses more problems if the number of devices that share a key is very large. When an attacker knows the key, they can connect to the network and decrypt user traffic. Good security practice mandates that the key should be changed whenever someone with access to the key leaves the organization.
In some guest deployments, PSK is used to provide a minimum amount of protection for guest sessions, and authentication is performed by a Layer 3 mechanism. This key should also be rotated on a regular basis.
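
As a sketch of why the shared passphrase must be guarded, standard WPA/WPA2-Personal key derivation (shown here with made-up values) gives every device that knows the passphrase the same 256-bit pairwise master key, derived from the passphrase and SSID with PBKDF2-SHA1 over 4096 iterations:

    # WPA/WPA2-PSK pairwise master key derivation (illustrative values).
    import hashlib

    def wpa_psk_pmk(passphrase, ssid):
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

    print(wpa_psk_pmk("correct horse battery staple", "CorpGuest").hex())
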
802.1X was developed to secure wired ports by placing the port in a “blocking” state until authentication is completed using the Extensible Authentication Protocol (EAP). The EAP framework allows many different authentication types to be used, the most common being Protected EAP (PEAP), followed by EAP-TLS that uses server- and client-side certificates.
To secure user credentials, a Transport Layer Security (TLS) tunnel is created and user credentials are passed to the authentication server within the tunnel. When the authentication is complete, the client and the Mobility Controller (tunnel mode) or AP (decrypt-tunnel and bridge modes) have copies of the keys that are used to protect the user session. The 802.1X handshake is shown in the figure below.
The  Mobility Controller forwards the request to the RADIUS server that performs the actual authentication and sends a response to the Aruba controller. When authentication completes successfully, the RADIUS server passes encryption keys to the  Mobility Controller. Any vendor-specific attributes (VSAs) are also passed, which contain information about the user. A security context is created, and for encrypted links, key exchange occurs where all traffic can now be encrypted.
The Mobility Controller uniquely supports the AAA FastConnect feature, which allows the encrypted portions of 802.1X authentication exchanges to be terminated on the Mobility Controller. The hardware encryption engine dramatically increases scalability and performance. AAA FastConnect is supported for PEAP-MSCHAPv2, PEAP-GTC, and EAP-TLS. When AAA FastConnect is used, external authentication servers do not need to handle the cryptographic components of the authentication process. AAA FastConnect permits several hundred authentication requests per second to be processed, which increases authentication server scalability. The complete authentication process is shown in the figure below.
If the user already exists in the active user database and now attempts to associate to a new AP, the mobility controller understands that an active user has moved, and it restores the user connectivity state. 
Machine authentication authenticates Windows-based machines that are part of an Active Directory domain. Before the user logs in, the machine authenticates to the network and proves that it is a part of the domain. After that authentication succeeds or fails, the user can log in using 802.1X. Based on the combinations of success or failure, different roles on the system are assigned. The matrix below describes the possible condition states.
Machine Authentication Pass or Fail Matrix:
  • Machine authentication fails and user authentication fails: Layer 2 authentication fails and no role is assigned.
  • Machine authentication fails (for example, the machine information is not present on the server) but user authentication succeeds: server-derived roles do not apply; the 802.1X authentication default user role configured in the 802.1X authentication profile is assigned.
  • Machine authentication succeeds and user authentication has not been initiated: server-derived roles do not apply; the machine authentication default machine role configured in the 802.1X authentication profile is assigned.
  • Machine authentication and user authentication both succeed: this is the only case where server-derived roles are applied. A role derived from the authentication server takes precedence; otherwise, the 802.1X authentication default role configured in the AAA profile is assigned.
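
The same logic can be sketched as a small decision function (the role names below are placeholders for illustration, not actual configuration keywords):

    # Role assignment following the machine-authentication matrix above.
    def assign_role(machine_ok, user_ok, server_derived_role=None):
        if not machine_ok and not user_ok:
            return None                         # Layer 2 authentication fails
        if not machine_ok and user_ok:
            return "8021X_DEFAULT_USER_ROLE"    # server-derived roles do not apply
        if machine_ok and not user_ok:
            return "DEFAULT_MACHINE_ROLE"       # user auth not (yet) initiated
        # both succeeded: the only case where a server-derived role applies
        return server_derived_role or "8021X_DEFAULT_ROLE_FROM_AAA_PROFILE"
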
For clients that do not support Wi-Fi Protected Access® (WPA™), VPN, or other security software, Aruba supports a web-based captive portal that provides secure browser-based authentication and third-party captive portals. Captive portal authentication is encrypted using Secure Sockets Layer (SSL) to protect credentials. Captive portal authentication supports:
Captive portal uses the Aruba integrated internal database and guest provisioning system to provide a secure guest access solution. Captive portal permits front-desk staff to issue and track temporary authentication credentials for individual visitors (see Figure ).
Typically a guest user connects to the guest SSID, which requires no 802.11 (Layer 2) authentication and provides no encryption, and is placed in a state that requires login. When the user opens a web browser, a captive portal screen asks them to enter credentials, enter an email address, or simply accept a set of service terms. The captive portal page can be customized with a different background, content, and terms of service. Authentication with the mobility controller is protected in an SSL/TLS tunnel. After the captive portal authentication completes, user traffic passes through the controller without 802.11 (Layer 2) encryption, which leaves transmissions open to interception. Clients should be encouraged to use their own encryption, such as VPN, when using open network connections.
When WEP was compromised, many organizations did not want to give up the convenience of wireless networks, but they needed something with stronger security until the 802.11i amendment was finalized and available. Many organizations resorted to using existing VPN infrastructure to secure the WLAN. This approach provided the security personnel with the sense that they were using a well-known and trusted form of security. The traditional VPN u-turn is seen in Figure .
Figure  VPN over Wi-Fi
The downside is that the VPN infrastructure was not designed for LAN network speeds. The VPN infrastructure was designed to be used across relatively slow WAN connections. End users who expect wire-like speed from the 802.11n network will not be satisfied with VPN over Wi-Fi. Additionally, VPN concentrators had expensive per-seat licenses that were expected to be shared across multiple users who connected for short periods, not extended-use sessions of workers who connected on the campus. The VPN solution is more expensive for the organization because more licenses and VPN concentrators must be acquired.
To summarize the authentication recommendations: PSK, for example, is recommended only for securing guest access or for devices that do not support stronger authentication; add captive portal authentication after PSK where possible, and change the key often.
The network administrator must not only authenticate devices, but must also select a form of encryption (if any) that will be applied on the physical connection between the user device and the AP. Encryption is strongly recommended in most cases, because the wireless transmissions of an organization are easily captured or “sniffed” directly in the air during transmission.
As the name implies, open networks have no encryption and offer no protection from wireless packet captures. Most hot spot or guest networks are open networks, because the end user is expected to use their own protection methods to secure their transmissions, such as VPN or SSL.
Though WEP is an authentication method, it is also an encryption algorithm where all users typically share the same key. As mentioned previously, WEP is easily broken with automated tools, and should be considered no more secure than an open network. Aruba recommends against deploying WEP encryption. Organizations that use WEP are strongly encouraged to move to Advanced Encryption Standard (AES) encryption.
The Temporal Key Integrity Protocol (TKIP) was a stopgap measure to secure wireless networks that previously used WEP encryption and whose 802.11 adapters were not capable of supporting AES encryption. TKIP uses the same encryption algorithm as WEP, but TKIP is much more secure and has an additional message integrity check (MIC). Recently some cracks have begun to appear in the TKIP encryption methods. Aruba recommends that all users migrate from TKIP to AES as soon as possible.
The Advanced Encryption Standard (AES) encryption algorithm is now widely supported and is the recommended encryption type for all wireless networks that contain any confidential data. AES in Wi-Fi leverages 802.1X or PSKs to generate per station keys for all devices. AES provides a high level of security, similar to what is used by IP Security (IPsec) clients. Aruba recommends that all devices be upgraded or replaced so that they are capable of AES encryption.
In most instances, a new encryption type requires an additional SSID to support that new encryption mode. Mixed mode allows APs to combine TKIP and AES encryption on the same SSID. The encryption type is selected based on what the client station supports, and the strongest encryption possible is used for each client.
To summarize the encryption recommendations for Wi-Fi networks: as a reminder, full 802.11n rates are only available when using either open (no encryption) or AES-encrypted networks. This is a standards requirement for 802.11n.
The Wi-Fi Alliance is a trade group that is made up of 802.11 hardware vendors. The Wi-Fi Alliance created the Wi-Fi Protected Access (WPA) and WPA2™ certifications to describe the 802.11i standard. The standard was written to replace WEP, which was found to have numerous security flaws.
It was taking longer than expected to complete the standard, so WPA was created based on a draft of 802.11i, which allowed people to move forward quickly to create more secure WLANs. WPA2 encompasses the full implementation of the 802.11i standard. The key difference between the two certifications is the encryption they mandate: WPA uses the Temporal Key Integrity Protocol (TKIP) with a message integrity check (MIC), while WPA2 uses the Advanced Encryption Standard in Counter Mode with Cipher Block Chaining Message Authentication Code (AES-CCMP). Remember that WPA2 is a superset that encompasses the full WPA feature set.
To summarize the recommended authentication and encryption combinations for Wi-Fi networks: use AES if possible, and TKIP or WEP only if necessary (combined with a restricted PEF user role).