Wednesday, March 3, 2010

1, 2, 3 or More - How Many Cisco Switches in Your Lab?

When building a lab to practice for CCNA or CCNP, you need Cisco switches. Part of the decision must be how many switches you can afford. And of course you'll want to know whether adding that extra switch really helps you study and practice, or not - and that's hard to know until you understand the switching topics in a particular exam. Today (and in the next several posts), I'll discuss the topology issues comparing the 1, 2, and 3 switch options, first for CCNA and then for CCNP.

The choice of how many switches, and which models, tends to give people a little more trouble than the similar decision about routers. Why? First, unless you're relying on a simulator, you'll need real switches. Also, plenty of features require a 2nd switch for any meaningful practice - STP and VTP come to mind immediately - yet it's hard to know how much you need 2 or even 3 switches until you're pretty far into studying for CCNA. So let's start with a CCNA breakdown of the 1 switch and 2 switch topologies, looking at the features that can be practiced reasonably well in each case.

For both topologies, I would expect you to have at least 2 (preferably 3) other devices to drive traffic for testing. For CCNA, you need routers as well. So I'll count on your having two routers, and a PC from which to configure things.
If you look at all the CCNA switching topics, a lot of them can be done in a lab with a single switch. For example, for CCNA:

• basic administration and CLI practice (passwords, hostnames, banners)
• VLANs
• Interfaces (speed, duplex, autonegotiation)
• IP access to switch
• 802.1Q trunking with a router
• Voice VLAN
• STP portfast
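
Most of these single-switch items take only a handful of commands. As a sketch - the interface numbers, VLAN ID, and IP address below are just example choices, not requirements - VLAN creation plus an 802.1Q trunk to a router (router-on-a-stick) looks something like this:

```
! On the switch: create a VLAN and assign an access port to it
vlan 10
 name USERS
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
! Trunk to the router for router-on-a-stick
interface FastEthernet0/24
 switchport mode trunk
! On the router: one 802.1Q subinterface per VLAN
interface FastEthernet0/0.10
 encapsulation dot1Q 10
 ip address 10.1.10.1 255.255.255.0
```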

But as with routers, there are several CCNA features that just can't be practiced meaningfully without at least two switches. Then, when you get to CCNP SWITCH and TSHOOT, there are several more. For instance, for CCNA:

• VLAN Trunking Protocol (VTP)
• Spanning Tree Protocol (STP)
• Switch-switch 802.1Q trunking
• Etherchannel
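
Once the 2nd switch arrives, the features above boil down to a few commands on the inter-switch link. A minimal sketch (the port numbers and VTP domain name are arbitrary examples):

```
! On both switches: make the link between them an 802.1Q trunk
interface FastEthernet0/12
 switchport mode trunk
! VTP: same domain on both; use "vtp mode server" on one switch
! and "vtp mode client" on the other
vtp domain CCNALAB
vtp mode server
! Etherchannel: bundle two parallel links (configure on both ends)
interface range FastEthernet0/13 - 14
 channel-group 1 mode on
! STP: verify the topology; one side of a redundant link should block
show spanning-tree vlan 1
```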

For CCNP, I can appreciate the fact that you may be forced to use a single switch for your lab, just due to cost. However, roughly 1/3 of the core page count of the book focuses on topics like STP (and its many variations), VTP, trunking, and Etherchannel, all of which need at least two switches. The bigger question for CCNP (in my opinion) is whether you spring for a 3rd or 4th switch, and whether you make any of those layer 3 switches. I'll get into the layer 3 tradeoffs in the CCNP lab series (next up in the list), but this 3-4 post series will look hard at the layer 2 features related to the question of adding a 3rd switch to a CCNP lab. In short, a 3rd switch:

• Matches the triangle design used in most campus switch designs
• Allows configuration and meaningful testing of all CCNP STP features
• Much more interesting STP topologies for more meaningful practice
• More meaningful VTP experiments (eg, one each server, client, transparent)
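
For example, with three switches cabled in a triangle, you can steer the STP root election and watch the results; a sketch (the VLAN number and priority are example values):

```
! On SW1: claim the root role for VLAN 10 (lowest priority wins)
spanning-tree vlan 10 root primary
! On SW2: become root only if SW1 fails
spanning-tree vlan 10 root secondary
! Alternatively, set a priority explicitly (multiples of 4096)
spanning-tree vlan 10 priority 4096
! On SW3: see which of its two uplinks went into blocking state
show spanning-tree vlan 10
```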

Monday, February 22, 2010

The Kneber botnet revealed

Infiltration of Kneber reveals interesting data, but what is the threat?
Security vendor Net Witness recently tapped into the logs of a command-and-control server for a botnet it calls Kneber, which has infected at least 75,000 computers at 2,500 companies and government agencies worldwide. Here are some answers to frequently asked questions about the botnet.

What exactly is the Kneber botnet?

It's a botnet discovered Jan. 26, 2010, by Net Witness that compromised 74,000 computers via the ZeuS Trojan and gathered logon and password information from them. Net Witness announced its discovery Thursday.

Where did it get its name?

The name comes from the registrant for the original domain used to pull together various components of the botnet -- hilarykneber@yahoo.com.

How old is it?

The first activity from it was March 25, 2009.

Is it out of business now?

No. After a command-and-control server for it was traced to Germany, its URL was changed, and it's running just as it was before it was discovered. The data gleaned from the server has been turned over to law enforcement agencies, and major companies whose employees' computers were bots have been notified.

What damage can it do?

Individuals whose personal data was mined might suffer financial loss if criminals use the data to transfer funds out of their accounts.

What exactly is the ZeuS Trojan?

ZeuS, also called Zbot, is a very effective cybercrime tool that is routinely updated to become more sophisticated and stealthy. It can present a different profile on each computer it infects, making it difficult to catch using signatures.

What do cyber criminals use it for?

It's often used to gather user logons and passwords, and injects its own fields into Web pages seeking more detailed information about the user's identity. But it can also steal whatever data is on a computer, can enable remote control of compromised machines and can download other malware. It also periodically uploads what it gathers to command-and-control Web servers.

How dangerous is it?

It is ranked as the most dangerous type of botnet in operation by the security firm Damballa, and 1,313 ZeuS command-and-control servers have been identified by Zeus Tracker. A ZeuS botnet was once used to steal records of people looking for jobs through Monster.com.

Why has it been around for so long?

The bot-creation kit is constantly upgraded to be less detectable and more flexible. It is encrypted, and it adopts rootkit characteristics to hide in infected machines. It sells for about $4,000 per copy, so there are many cyber gangs using it to create botnets for their own illicit activity.

Is there any hope of stopping it?

Competition may help. A Trojan called Spy Eye does much the same thing as ZeuS and comes with a ZeuS uninstaller, so if it hits a machine already enlisted in a ZeuS botnet, it can kick out ZeuS and claim the machine for itself. Of course, the computer is still a bot, just with a different commander.

Tuesday, February 2, 2010

10 fool-proof predictions for the Internet in 2020

1. More people will use the Internet.

Today's Internet has 1.7 billion users, according to Internet World Stats. This compares with a world population of 6.7 billion people. There's no doubt more people will have Internet access by 2020. Indeed, the National Science Foundation predicts that the Internet will have nearly 5 billion users by then. So scaling continues to be an issue for any future Internet architecture.

2. The Internet will be more geographically dispersed.

Most of the Internet's growth over the next 10 years will come from developing countries. The regions with the lowest penetration rates are Africa (6.8%), Asia (19.4%) and the Middle East (28.3%), according to Internet World Stats. In contrast, North America has a penetration rate of 74.2%. This trend means the Internet in 2020 will not only reach more remote locations around the globe but also will support more languages and non-ASCII scripts.

3. The Internet will be a network of things, not computers.

As more critical infrastructure gets hooked up to the Internet, the Internet is expected to become a network of devices rather than a network of computers. Today, the Internet has around 575 million host computers, according to the CIA World Factbook 2009. But the NSF is expecting billions of sensors on buildings and bridges to be connected to the Internet for such uses as electricity and security monitoring. By 2020, it's expected that the number of Internet-connected sensors will be orders of magnitude larger than the number of users.

4. The Internet will carry exabytes — perhaps zettabytes — of content.

Researchers have coined the term "exaflood" to refer to the rapidly increasing amount of data — particularly high-def images and video – that is being transferred over the Internet. Cisco estimates that global Internet traffic will grow to 44 exabytes per month by 2012 — more than double what it is today. Increasingly, content providers such as Google are creating this content rather than Tier 1 ISPs. This shift is driving interest in re-architecting the Internet to be a content-centric network, rather than a transport network.

5. The Internet will be wireless.

The number of mobile broadband subscribers is exploding, hitting 257 million in the second quarter of 2009, according to Informa. This represents an 85% year-over-year increase for 3G, WiMAX and other higher-speed data networking technologies. Currently, Asia has the most wireless broadband subscribers, but the growth is strongest in Latin America. By 2014, Informa predicts that 2.5 billion people worldwide will subscribe to mobile broadband.

6. More services will be in the cloud.

Experts agree that more computing services will be available in the cloud. A recent study from Telecom Trends International estimates that cloud computing will generate more than $45.5 billion in revenue by 2015. That's why the National Science Foundation is encouraging researchers to come up with better ways to map users and applications to a cloud computing infrastructure. They're also encouraging researchers to think about latency and other performance metrics for cloud-based services.

7. The Internet will be greener.

Internet operations consume too much energy today, and experts agree that a future Internet architecture needs to be more energy efficient. The amount of energy consumed by the Internet doubled between 2000 and 2006, according to Lawrence Berkeley National Laboratory. But the Internet's so-called Energy Intensity is growing at a slower rate than data traffic volumes as networking technologies become more energy efficient. The trend towards greening the Internet will accelerate as energy prices rise, according to experts pushing energy-aware Internet routing.

8. Network management will be more automated.

Besides weak security, the biggest weakness in today's Internet is the lack of built-in network management techniques. That's why the National Science Foundation is seeking ambitious research into new network management tools. Among the ideas under consideration are automated ways to reboot systems, self-diagnosing protocols, finer grained data collection and better event tracking. All of these tools will provide better information about the health and status of networks.

9. The Internet won't rely on always-on connectivity.

With more users in remote locations and more users depending on wireless communications, the Internet's underlying architecture can no longer presume that users have always-on connections. Instead, researchers are looking into communications techniques that can tolerate delays or can forward communications from one user to another in an opportunistic fashion, particularly for mobile applications. There's even research going on related to an inter-planetary Internet protocol, which would bring a whole new meaning to the idea of delay-tolerant networking.

10. The Internet will attract more hackers.

In 2020, more hackers will be attacking the Internet because more critical infrastructure like the electric grid will be online. The Internet is already under siege, as criminals launch a rising number of Web-based attacks against end users visiting reputable sites. Symantec detected 1.6 million new malicious code threats in 2008 – more than double the 600,000 detected the previous year. Experts say these attacks will only get more targeted, more sophisticated and more widespread in the future.
More than anything else, computer scientists who are working on redesigning the Internet are trying to improve its security. Experts agree that security cannot be an add-on in a redesign of the Internet. Instead, the new Internet must be built from the ground up to be a secure communications platform. Specifically, researchers are exploring new ways to ensure that the Internet of 2020 has confidentiality, integrity, privacy and strong authentication.

Wednesday, December 30, 2009

World is in your home now



Turn your living room TV into a video phone with an optional webcam. Create a video conference with the office. Record holidays without setting up the tripod and camcorder.

The Home Theater PC - the center of your new home theater network. Blu-Ray drive for playing and burning your movies and recorded television. Standard DVD dual-layer burner plays and records your media. Touch-screen LCD. Completely integrates with your existing home theater hardware (including DVD players and recorders, stereo head units, speaker systems, etc.).

Connects your standard or HD TV to your PC network, allowing you to stream multimedia content from your Home Theater PC to any TV in the house. Stream pictures, music, even videos. Works with wired Ethernet or wireless G.

Built-in mouse and remote control give you ultimate control of your Home Theater PC - from your couch. One-button access allows you to switch between writing e-mails and watching TV instantly.

Your optional universal remote is totally programmable. Plug your remote into your PC to program each device in your home theater. Program combinations - turn on your TV and stereo system with one button. Built-in LCD screen lets you see exactly what options, menus and devices you are selecting. Program your TV, VCR, DVD player, cable or satellite box - everything but the kitchen sink.

DVD jukebox allows you to load up to 200 movies at once, all of which are instantly accessible on your Home Theater PC. Totally expandable - add additional jukeboxes as needed, capable of storing and accessing over 1,000 DVDs at the touch of a button. Burns your recorded TV shows.

Holding up to 1,000 GB of data, this network-accessible hard drive expands your storage capacity and simplifies file sharing. Built-in USB port allows you to connect a printer, which every PC in the house can print to. Gigabit Ethernet ensures blazing-fast access to your data. Forget your work disk at home? Not a problem. Access this drive from ANY PC with an internet connection.

Wi-Fi routers allow your PCs to connect to each other and the internet, safely and securely. Walk from one room to another with your laptop PC while surfing the internet.

Multimedia drive allows you to download multimedia from the flash memory cards of your digital cameras, camcorders, and cell phones. Additional ports allow you to directly connect a camcorder or camera to download media.

Computer Spyware Protection

What is Spyware/Adware

Spyware is software that has been created to track and report what you do on the computer! Some of the "worst" spyware will actually search your hard drive for personal information, credit card numbers, bank accounts, passwords, and other confidential information.

Why Do I Need to Remove Spyware/Adware

Spyware and/or malware has been specifically designed to be difficult to remove. Once you have spyware or adware on your computer, many virus removal programs and firewalls will not be able to "touch" or remove it. Spyware has become the number one threat to all internet users worldwide. It is possible that 9 out of every 10 computers are infected. Spyware can destroy your PC and disrupt the tasks you are trying to accomplish on the internet.

Home Wireless Networks


That One Computer Guy can set up your secure, home wireless network. A sample network is shown here:

Monday, December 14, 2009

Cloud Computing

Cloud computing is Internet-based ("cloud"-based) development and use of computer technology ("computing"). In concept, it is a paradigm shift whereby details are abstracted from the users, who no longer need knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. It typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet.

The term cloud is used as a metaphor for the Internet, based on how the Internet is depicted in computer network diagrams, and is an abstraction of the underlying infrastructure it conceals. Typical cloud computing providers deliver common business applications online, accessed from a web browser, while the software and data are stored on the servers.

These applications are broadly divided into the following categories: Software as a Service (SaaS), Utility Computing, Web Services, Platform as a Service (PaaS), Managed Service Providers (MSP), Service Commerce, and Internet Integration. The name cloud computing was inspired by the cloud symbol that is often used to represent the Internet in flow charts and diagrams.

Cloud computing users can avoid capital expenditure (CapEx) on hardware, software, and services when they pay a provider only for what they use. Consumption is usually billed on a utility (resources consumed, like electricity) or subscription (time-based, like a newspaper) basis with little or no upfront cost. Other benefits of this time sharing-style approach are low barriers to entry, shared infrastructure and costs, low management overhead, and immediate access to a broad range of applications. In general, users can terminate the contract at any time (thereby avoiding return on investment risk and uncertainty), and the services are often covered by service level agreements (SLAs) with financial penalties.
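
To make the utility-versus-subscription comparison concrete, here is a minimal sketch in Python; the hourly rate and monthly fee are made-up example figures, not any provider's real pricing:

```python
def utility_cost(hours_used: float, rate_per_hour: float) -> float:
    """Utility billing: pay only for resources consumed, like electricity."""
    return hours_used * rate_per_hour

def subscription_cost(months: int, monthly_fee: float) -> float:
    """Subscription billing: a flat time-based fee, like a newspaper."""
    return months * monthly_fee

def break_even_hours(monthly_fee: float, rate_per_hour: float) -> float:
    """Usage level at which the two billing models cost the same."""
    return monthly_fee / rate_per_hour

# A server used 200 hours in a month at a hypothetical $0.10/hour,
# versus a hypothetical flat $50/month subscription.
print(utility_cost(200, 0.10))       # light usage favors utility billing
print(subscription_cost(1, 50.0))
print(break_even_hours(50.0, 0.10))  # above this usage, the flat fee wins
```

The break-even point is simply the flat fee divided by the hourly rate; below that usage level, pay-as-you-go is the cheaper model, which is exactly the appeal for users with little or spiky demand.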

According to Nicholas Carr, the strategic importance of information technology is diminishing as it becomes standardized and less expensive. He argues that the cloud computing paradigm shift is similar to the displacement of electricity generators by electricity grids early in the 20th century.

Although companies might be able to save on upfront capital expenditures, they might not save much and might actually pay more for operating expenses. In situations where the capital expense would be relatively small, or where the organization has more flexibility in their capital budget than their operating budget, the cloud model might not make great fiscal sense. Other factors impacting the scale of any potential cost savings include the efficiency of a company’s data center as compared to the cloud vendor’s, the company's existing operating costs, the level of adoption of cloud computing, and the type of functionality being hosted in the cloud.

Types by visibility

Public cloud

Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.

Hybrid cloud

A hybrid cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises". A hybrid cloud can describe a configuration combining a local device, such as a Plug computer, with cloud services. It can also describe configurations combining virtual and physical, colocated assets - for example, a mostly virtualized environment that requires physical servers, routers, or other hardware such as a network appliance acting as a firewall or spam filter.

Private cloud

Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These (typically virtualisation automation) products claim to "deliver some benefits of cloud computing without the pitfalls", capitalising on data security, corporate governance, and reliability concerns. They have been criticized on the basis that users "still have to buy, build, and manage them" and as such do not benefit from lower up-front capital costs and less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".

While an analyst predicted in 2008 that private cloud networks would be the future of corporate IT, there is some uncertainty whether they are a reality even within the same firm. Analysts also claim that within five years a "huge percentage" of small and medium enterprises will get most of their computing resources from external cloud computing providers, as they "will not have economies of scale to make it worth staying in the IT business" or be able to afford private clouds. Analysts have reported on Platform's view that private clouds are a stepping stone to external clouds, particularly for the financial services, and that future datacenters will look like internal clouds.

The term has also been used in the logical rather than physical sense, for example in reference to platform-as-a-service offerings, though such offerings, including Microsoft's Azure Services Platform, are not available for on-premises deployment.

Types by services

Services provided by cloud computing can be split into three major categories.

Infrastructure-as-a-Service (IaaS)

Infrastructure-as-a-Service providers like Amazon Web Services provide virtual servers with unique IP addresses and blocks of storage on demand. Customers benefit from an API through which they can control their servers. Because customers pay for exactly the amount of service they use, as with electricity or water, this model is also called utility computing.

Platform-as-a-Service (PaaS)

Platform-as-a-Service is a set of software and development tools hosted on the provider's servers. Developers can create applications using the provider's APIs. Google Apps is one of the most famous Platform-as-a-Service providers. Developers should take notice that there aren't any interoperability standards (yet), so some providers may not allow you to take your application and put it on another platform.

Software-as-a-Service (SaaS)

Software-as-a-Service (SaaS) is the broadest market. In this case the provider allows the customer only to use its applications; the software interacts with the user through a user interface. These applications can be anything from web-based email to applications like Twitter or Last FM.