Super Smartphone Apps For Network Administrators

Network analysis tools are a must-have for networking professionals, providing crucial insight into performance and helping to diagnose bottlenecks and slowdowns. The right statistics and data about traffic flows, device configurations, and user behavior can identify problems quickly, or even before they actually happen.

Having that information immediately accessible — literally in the palm of your hand — can make things even easier. The use of mobile apps has exploded, and software has matured from games and entertainment to tools robust enough to use on the job. IT pros can increase their productivity and save precious time with access to network data with a simple tap on a smartphone, whether they are in the office, relaxing at home, or commuting on the train.

Here we highlight some of the highest user-rated network utilities — some developed for iOS, some for Android, and some available for both. We chose independent tools that are not simply an extension of a larger network management platform, so no other software is necessary. An extra bonus is that most of these apps are free or available at minimal cost.

Network Troubleshooting Device Cleanup

I can’t stress enough how helpful and important it is to understand the protocols used by your devices, operating system and applications. When I touch on this topic, the typical response from networking pros is, “It’s not our problem.”

I understand that in most cases, the network staff is not responsible for desktop configurations. But, since computers are responsible for generating additional traffic and possible issues, I believe networking staff should be familiar with desktop protocols and how to generally optimize them. Cleaning up unused protocols and services will enable you to establish a baseline to streamline network troubleshooting.
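As a concrete illustration of baselining, here is a small sketch (the protocol names and the expected set are invented for the example): it counts the protocols seen in a capture summary and flags anything outside an expected baseline, which is exactly the kind of cleanup target worth investigating.

```python
from collections import Counter

# Hypothetical input: one protocol name per observed packet, e.g. exported
# from a capture tool. The names and the baseline set below are illustrative.
EXPECTED = {"ARP", "DNS", "TCP", "UDP", "ICMP"}

def baseline_report(observed_protocols):
    """Count observed protocols and flag any outside the expected baseline."""
    counts = Counter(observed_protocols)
    unexpected = {p: n for p, n in counts.items() if p not in EXPECTED}
    return counts, unexpected

counts, unexpected = baseline_report(
    ["TCP", "TCP", "DNS", "LLMNR", "SSDP", "TCP", "ICMP", "SSDP"]
)
print(unexpected)  # protocols worth investigating, e.g. LLMNR and SSDP
```

Anything that shows up in the unexpected bucket — chatty discovery protocols, rogue services — is a candidate for cleanup before it muddies your troubleshooting baseline.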

I have seen computers set up as DHCP servers, access points, routers, and the list goes on. In this video, I talk about configurations that cause obvious operational issues, but it doesn’t have to be that dramatic. I love telling crowds about one of my most recent troubleshooting engagements, where the “X-Files”-type performance issues ended up being a misconfigured printer. In short, the printer was configured as an IPv6 DHCP server and router, so everyone had to route through the printer.

Building A WAN Using Cisco DMVPN

Dynamic Multipoint VPN technology enables organizations to connect their offices via VPNs built over the Internet. Here’s how it works.

WANs have been around for a long time. The first networks were built for local needs, but once they became popular, there was a need to connect offices to each other. Frame Relay was one popular technology for building WANs and connecting offices. Frame Relay, a non-broadcast multiple access (NBMA) technology, was normally used to build a hub-and-spoke network. In hub-and-spoke networks, traffic is passed from spokes to the hub and then to other spokes.

Today, many organizations are turning to the Internet for their WAN needs. Why? There’s potentially a lot of money to save by buying Internet circuits as opposed to other technologies such as Multiprotocol Label Switching (MPLS). For large organizations with many WAN circuits, savings can run into the millions of dollars.

Dynamic Multipoint VPN (DMVPN) is a Cisco technology that’s very popular for building VPNs over the Internet. This is a design blog so we won’t dive into the technical details, but here’s a brief overview of how the technology works and available designs.
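One way to picture the mechanism: DMVPN relies on the Next Hop Resolution Protocol (NHRP), in which each spoke registers its public address with the hub and later queries the hub to find other spokes, enabling dynamic spoke-to-spoke tunnels. The toy model below (addresses are documentation examples, and the classes are my own sketch, not Cisco code) captures just that registration-and-resolution step.

```python
# Toy model of NHRP-style registration and resolution, the mechanism DMVPN
# uses so spokes can find each other over the Internet. Addresses are
# illustrative documentation-range examples.

class Hub:
    def __init__(self):
        self.nhrp_cache = {}  # tunnel (private) IP -> public (NBMA) IP

    def register(self, tunnel_ip, public_ip):
        """A spoke registers its current public address with the hub."""
        self.nhrp_cache[tunnel_ip] = public_ip

    def resolve(self, tunnel_ip):
        """A spoke asks: what public address does this tunnel IP live at?"""
        return self.nhrp_cache.get(tunnel_ip)

hub = Hub()
hub.register("10.0.0.2", "203.0.113.10")   # spoke A comes online
hub.register("10.0.0.3", "198.51.100.20")  # spoke B comes online

# Spoke A wants to reach spoke B directly: it resolves B via the hub, then
# builds a dynamic spoke-to-spoke tunnel to the returned public address.
print(hub.resolve("10.0.0.3"))  # -> 198.51.100.20
```

Because spokes register dynamically, they can sit behind ordinary Internet connections with changing addresses — the hub always knows where everyone currently is.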

Why the Data Deluge Leaves Us Struggling to Make Up Our Minds

We make a huge number of decisions every day. When it comes to eating alone, for example, we make more than 200 decisions a day that we’re not consciously aware of. How is this possible? Because, as Daniel Kahneman has explained, while we’d like to think our decisions are rational, many are in fact driven by gut feel and intuition. The ability to reach a decision based on what we know and what we expect is an inherently human characteristic.

The problem we face now is that we have too many decisions to make every day, leading to decision fatigue: we find the act of making our own decisions exhausting, even more so than deliberating between options presented to us or being told by others what to do.

Why not allow technology to ease the burden of decision-making? The latest smart technologies are designed to monitor and learn from our behavior, physical performance, work productivity levels and energy use. This is what has been called Era Three of Automation – when machine intelligence becomes faster and more reliable than humans at making decisions.

You, Me and the Algorithm

Intelligent systems use algorithms (formulas for taking in data and outputting other data) to learn patterns and behaviors from how

Google’s Artificial Intelligence Masters Classic Atari Video Games

Think you’re good at classic arcade games such as Space Invaders, Breakout and Pong? Think again.

In a groundbreaking paper published yesterday in Nature, a team of researchers led by DeepMind co-founder Demis Hassabis reported developing a deep neural network that was able to learn to play such games at an expert level.

What makes this achievement all the more impressive is that the program was not given any background knowledge about the games. It just had access to the score and the pixels on the screen.

It didn’t know about bats, balls, lasers or any of the other things we humans need to know about in order to play the games.

But by playing lots and lots of games many times over, the computer learned first how to play, and then how to play well.
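The learning rule at the heart of such a system is Q-learning. The sketch below is my own illustration, not DeepMind's code: it applies the tabular version to a toy five-state corridor where, as in the Atari setup, only a score signal is observed. DeepMind's contribution was replacing the table with a deep network reading raw pixels.

```python
import random

# Tabular Q-learning on a trivial 5-state corridor: move right to reach the
# goal at state 4. DQN replaces this table with a deep network over pixels,
# but the update rule below is the same core idea.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
ACTIONS = [-1, +1]  # left, right
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(4, state + action))
    reward = 1.0 if nxt == 4 else 0.0  # only the "score" is observed
    return nxt, reward

random.seed(0)
for _ in range(500):
    s = 0
    while s != 4:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON else \
            max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        # The Q-learning update: nudge the estimate toward reward + discounted
        # value of the best next action.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy moves right from every state.
print([max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(4)])
```

Early episodes are long and aimless — the agent stumbles around until it first finds reward — and then performance snowballs, which mirrors the "first how to play, then how to play well" trajectory described above.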

A Machine That Learns From Scratch

This is the latest in a series of breakthroughs in deep learning, one of the hottest topics today in artificial intelligence (AI).

Actually, DeepMind’s program isn’t the first such success at playing games. Twenty years ago, a computer program known as TD-Gammon learned to play backgammon at a superhuman level, also using a neural network.

But TD-Gammon never did so well at similar games such as chess,

Facebook Experiments on Users, Faces Blowback

People don’t often get riled up about research, but when Facebook toyed with its members’ emotions without telling them, it stirred up plenty of feelings offline as well.

In June, researchers revealed in the Proceedings of the National Academy of Sciences that they had manipulated news feeds of unsuspecting Facebook users, influencing whether they felt positive or negative emotions. News of the experiment angered scores of users and privacy advocates. Before long, the journal backpedaled from its decision to publish the study with an “Editorial Expression of Concern,” admitting that participants may not have known they were guinea pigs and did not get the chance to opt out.

The experiment, conducted over one week in January 2012, looked at “emotional contagion” — whether user emotions would affect other users’ emotions online, as they do in person. Facebook used an algorithm to weed out posts with positive or negative words in the news feeds of 690,000 of its 1.3 billion users. Users who saw positive posts were more likely to post positive things than users who saw negative posts.
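The filtering step can be pictured with a toy sketch. The study classified posts using the LIWC word lists; the tiny lists and function below are purely illustrative stand-ins, not Facebook's actual system.

```python
# Toy version of the filtering step described in the study: posts containing
# words from one emotion list are withheld from the feed. The word lists here
# are illustrative; the study used the much larger LIWC lexicons.
POSITIVE = {"happy", "great", "love"}
NEGATIVE = {"sad", "awful", "hate"}

def filter_feed(posts, suppress):
    """Return the feed with any post containing a suppressed word removed."""
    return [p for p in posts
            if not (set(p.lower().split()) & suppress)]

feed = ["What a great day", "I hate Mondays", "Lunch was fine"]
print(filter_feed(feed, NEGATIVE))  # ['What a great day', 'Lunch was fine']
```

Suppressing one list shifts the emotional tone of what a user sees, which is precisely the manipulation whose downstream effect the researchers measured.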

Privacy advocates demanded that the Federal Trade Commission investigate Facebook’s research practices since it had experimented on human subjects without their permission. The FTC, per

Turing Test-Beating Bot Reveals More About Humans Than Computers

After years of trying, it looks like a chatbot has finally passed the Turing Test. Eugene Goostman, a computer program posing as a 13-year-old Ukrainian boy, managed to convince 33% of judges that he was a human after having a series of brief conversations with them.

Most people misunderstand the Turing test, though. When Alan Turing wrote his famous paper on computing intelligence, the idea that machines could think in any way was totally alien to most people. Thinking – and hence intelligence – could only occur in human minds.

Turing’s point was that we do not need to think about what is inside a system to judge whether it behaves intelligently. In his paper he explores how broadly a clever interlocutor can test the mind on the other side of a conversation by talking about anything from maths to chess, politics to puns, Shakespeare’s poetry or childhood memories. In order to reliably imitate a human, the machine needs to be flexible and knowledgeable: for all practical purposes, intelligent.

The problem is that many people see the test as a measurement of a machine’s ability to think. They miss that Turing was treating the test as a

Big Brother Is Already Watching

It was just after midnight in New York when police chased down and arrested the suspect wanted in a potential hate crime. A gay man had been shot and killed with a silver revolver. The suspect was cooperative to a point. He gave them the silver revolver in his holster. He also gave them an ID.

Then he clammed up. When they brought him to the precinct to book him, the ID turned out to be fake. He wouldn’t tell them his real name. They couldn’t take his fingerprints.

By this time, it was just after 7 a.m. The officers called Edwin Coello, the sergeant who has led the New York Police Department’s Facial Identification Section since it was formed in late 2011. It was a Saturday, and Coello was still in his robe at home, but he pulled up a scan of the ID on his laptop and started working.

A guy cracking a case in his bathrobe sounds like something out of a cop show. On TV, police officers use technology as a kind of magic detective’s aid, pumping out important clues on demand. In the office, on a normal day, watching actual police detectives use actual technology on actual

Paving The Way For SDN Interoperability

Two new abstraction frameworks ensure interoperability between OpenFlow v1.3-enabled hardware-based switches from different vendors.

 

The beauty of software-defined networks is that they give you the freedom to program your network, down to individual flows, based on business requirements. However, too much freedom can be overwhelming.

The OpenFlow protocol provides a rich set of control capabilities, not all of which are supported by all switches. To date, SDN applications, controllers, and switches have had to sort out feature options at run-time, which has made interoperability (and predictable operation) difficult. For example, a switch typically includes one or more flow tables, which are organized as a pipeline. Currently, applications must be “pipeline aware,” which effectively makes them dependent on specific hardware.
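A toy model makes the pipeline problem concrete. In the sketch below (the field names and table layout are invented for illustration), each table either outputs the packet or hands it to a later table. An application that hard-codes this particular layout is "pipeline aware" — and therefore tied to hardware that implements exactly this layout.

```python
# Minimal model of an OpenFlow-style multi-table pipeline. Each table maps a
# match on packet fields to an instruction: forward out a port, or continue
# ("goto") a later table. Field names and the table layout are illustrative.

def run_pipeline(tables, packet):
    """Walk the packet through the pipeline starting at table 0."""
    table_id = 0
    while True:
        for match, instruction in tables[table_id]:
            if all(packet.get(f) == v for f, v in match.items()):
                kind, arg = instruction
                if kind == "output":
                    return arg          # egress port (or "controller")
                table_id = arg          # goto a later table
                break
        else:
            return None                 # table miss: drop the packet

pipeline = {
    0: [({"vlan": 10}, ("goto", 1)),        # classify by VLAN first
        ({}, ("output", "controller"))],    # default: punt to controller
    1: [({"dst_mac": "aa:bb:cc:00:00:01"}, ("output", "port2"))],
}

print(run_pipeline(pipeline, {"vlan": 10, "dst_mac": "aa:bb:cc:00:00:01"}))
```

Swap in a switch whose hardware splits classification across different tables and this application's assumptions break — which is the dependency that TTPs and flow objectives are designed to abstract away.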

The Open Networking Foundation and other SDN innovators recognized that some type of abstraction layer was needed to support hardware independence, and two major interoperability enablers have been developed: Table Type Patterns (TTPs) and flow objectives. These abstraction frameworks provide a foundation for full interoperability between OpenFlow v1.3-enabled switches, including hardware-based switches, making it safe for network operators of all types to start investing in SDN built on such hardware.

Why You Should Adopt A Cloud-First Networking Strategy

A recent survey of more than 13,000 developers, published by research firm VisionMobile, noted that developers (and in particular enterprise developers) aren’t always in control of the environments in which they develop, and surely aren’t in control of the environments in which their software is deployed. Yet despite being told that the public cloud is going to eat IT, 44% of developers rated private cloud as the most popular cloud development platform, ahead of AWS (16%) and Microsoft Azure (13%), according to the survey.

It’s easy to get caught up in the private versus public cloud argument, but the reality is that if 44% of developers are targeting private cloud, then 56% are targeting public cloud. In your organization, maybe it’s 50-50, or 60-40, or even 80-20. Regardless of the ratios, the reality is that the current state of the enterprise is a hybrid environment, and it’s going to stay that way for a while. For a variety of reasons, organizations are maintaining a presence in the data center even as they begin to take advantage of public cloud. In the long run, the winners are going to be organizations that learn to maximize benefits while minimizing costs across the entire

Can You Teach Creativity to a Computer?

From Picasso’s “The Young Ladies of Avignon” to Munch’s “The Scream,” what was it about some paintings that arrested people’s attention upon viewing them, that cemented them in the canon of art history as iconic works?

In many cases, it’s because the artist incorporated a technique, form or style that had never been used before. They exhibited a creative and innovative flair that would go on to be mimicked by artists for years to come.

Throughout human history, experts have often highlighted these artistic innovations, using them to judge a painting’s relative worth. But can a painting’s level of creativity be quantified by Artificial Intelligence (AI)?

At Rutgers’ Art and Artificial Intelligence Laboratory, my colleagues and I proposed a novel algorithm that assessed the creativity of any given painting, while taking into account the painting’s context within the scope of art history.

In the end, we found that, when presented with a large collection of works, the algorithm can successfully highlight paintings that art historians consider masterpieces of the medium.

The results show that humans are no longer the only judges of creativity. Computers can perform the same task – and may even be more objective.
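To make the idea concrete, here is a heavily simplified sketch of the underlying intuition: a work scores as creative when it differs from earlier works but is resembled by later ones. The similarity measure, the toy feature vectors, and the scoring weights below are stand-ins of mine, not the laboratory's actual algorithm.

```python
# Rough sketch of the intuition: creativity combines originality (low
# similarity to earlier works) with influence (high similarity to later
# works). The real system infers this over a large network of visual features.

def cosine(a, b):
    """Cosine similarity between two feature vectors (toy similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

def creativity(index, works):
    earlier = [cosine(works[index], works[j]) for j in range(index)]
    later = [cosine(works[index], works[j])
             for j in range(index + 1, len(works))]
    originality = 1.0 - (sum(earlier) / len(earlier) if earlier else 0.0)
    influence = sum(later) / len(later) if later else 0.0
    return originality + influence

# Toy "paintings" as feature vectors, in chronological order; the third
# introduces a new style that the following two imitate.
works = [(1, 0, 0), (1, 0.1, 0), (0, 1, 0), (0, 1, 0.1), (0.1, 1, 0)]
scores = [creativity(i, works) for i in range(len(works))]
print(max(range(len(works)), key=lambda i: scores[i]))  # index 2, the innovator
```

Even this crude version rewards the stylistic break rather than the imitators, which is the behavior the full algorithm exhibits at the scale of art history.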

Defining Creativity

Of course, the algorithm depended on addressing a central

Paving The Way For SDN Interoperability

TTPs and flow objectives

TTPs are an optional mechanism for enhancing usability of the OpenFlow protocol. Functioning like a control profile or OpenFlow

How Telecommunications Is Changing Work

Telecommunications allows an increasing number of organizations to operate in ways that previously could only be accomplished in person. The potential impact of this shift in work is financial, social, cultural, and environmental. For employers, this means:
  • Reduced space and energy requirements, along with reduced overhead.
  • Increased employee productivity and creativity.
  • An average 30-percent reduction in the organization’s carbon footprint. (Arguably, telework has a more significant environmental impact than any other single strategy. See the concept paper (628 KB PDF) that BetterWorld Telecom commissioned on this topic from the Bainbridge Graduate Institute for more details.)

Here’s one example, from Cisco’s implementation of teleworking:

  • Background — Implemented teleworking an average of two days a week for 2,000 employees
  • Profits — $277 million in saved costs
  • People — 80 percent of workers surveyed said teleworking improved their quality of life
  • Planet — 47,000 tons of carbon saved through teleworking; $10 million/year in saved fuel costs for employees

Yet a successful transition from a traditional work

Juniper Debuts Unite Architecture For Campus Networks

Juniper Networks today launched a new architecture for campus and branch networks with a new fabric that aims to simplify enterprise network management as companies shift to cloud services. As part of its new Unite architecture, Juniper also expanded its security portfolio with a new threat detection cloud service and additions to its SRX line of firewalls.

Altogether, Unite is designed to help enterprises have a common, converged network providing seamless and secure campus connectivity to applications wherever they sit — in a private cloud, on-premise data center or hybrid cloud.

A key part of the architecture is Junos Fusion Enterprise, which works with Juniper’s EX9200 programmable core switch to turn the campus network into a single manageable system. The fabric collapses multiple network layers to a flat tier, enabling enterprises to manage a single network across the data center and campus environments, Denise Shiffman, VP of Juniper’s development and innovation division, said in an interview.

Juniper has other network fabrics including MetaFabric, but they’re focused on the data center while Junos Fusion Enterprise is “a way to have that flattening occur at the edge,”

Artificial Intelligence Just Mastered Go, But One Game Still Gives AI Trouble

Go is a two-player board game that originated in China more than 2,500 years ago. The rules are simple, but Go is widely considered the most difficult strategy game to master. For artificial intelligence researchers, building an algorithm that could take down a Go world champion represents the holy grail of achievements.

Well, consider the holy grail found. A team of researchers led by Google DeepMind researchers David Silver and Demis Hassabis designed an algorithm, called AlphaGo, which in October 2015 handily defeated back-to-back-to-back European Go champion Fan Hui five games to zero. And as a side note, AlphaGo won 494 out of 495 games played against existing Go computer programs prior to its match with Hui — AlphaGo even spotted inferior programs four free moves.

“It’s fair to say that this is five to 10 years ahead of what people were expecting, even experts in the field,” Hassabis said in a news conference Tuesday.

Deep Blue took humans to the woodshed in chess. IBM’s Watson raked in winnings in Jeopardy! Silver and Hassabis in 2015 unveiled an algorithm that taught itself to conquer classic Atari games. Every year, it seems, humanity waves fewer and fewer title belts over computers in the

Inevitable Trends That Could Cripple Your Network

Now that the “perfect storm” of technologies is upon us, the enterprise network is coming into its own. Networking professionals often report feeling underappreciated because of the emergence of technologies like virtualization and automation. But, when examined closely, the network is on the brink of a moment of truth. Enterprises that understand the importance and value of the network will invest in the technology and leverage their infrastructure in order to build entirely new capabilities and streams of revenue. Those that don’t will fall by the wayside.

According to Verizon, the network has become “your business’ central nervous system.” We expect capabilities like ecommerce, quick tracking of products, systems and services, instant access, and cool customer apps and analytics. But none of that is possible without the network as the basic foundation. If the network isn’t up to snuff, the business services won’t be, either.

Verizon may hold this opinion partly because it makes a living running one of the largest networks in the world, but the sentiment echoes what we’ve been saying at Network Computing for years. The carrier commissioned Forrester Research to conduct a study about business challenges and IT transformation. In the resulting report, they conclude, “the network is

HP Spearheads Open Source Network OS Initiative

Linux-based OpenSwitch NOS is designed to provide data center networks with more flexibility.

 

The trend towards open networking picked up more steam today with the launch of an industry initiative to develop a new open source network operating system. Led by HP, the OpenSwitch Community is creating the Linux-based OpenSwitch NOS for data center switches.

Accton Technology, Broadcom, Intel and VMware teamed with HP on the initiative, which aims to provide data center operators with more choice and flexibility in networking gear.

Mark Carroll, chief technology officer for HP Networking, told me in an interview that OpenSwitch NOS is fully featured with Layer 2 and Layer 3 protocol support, and built to be highly programmable and reliable. It supports a variety of management interfaces, including CLI and RESTful APIs. The NOS will initially support top-of-rack data center switches.

Carroll said the NOS is designed to provide a flexible foundation for developers to create data center environments that meet the particular needs of their business. The initial developer release of the code is available now, and HP expects it will be the second half

Human-Like Neural Networks Make Computers Better Conversationalists

If you’ve ever tried to hold a conversation with a chatbot like CleverBot, you know how quickly the conversation turns to nonsense, no matter how hard you try to keep it together.

But now, a research team led by Bruno Golosio, assistant professor of applied physics at Università di Sassari in Italy, has taken a significant step toward improving human-to-computer conversation. Golosio and colleagues built an artificial neural network, called ANNABELL, that aims to emulate the large-scale structure of human working memory in the brain — and its ability to hold a conversation is eerily human-like.

Natural Language Processing

Researchers have been trying to design software that can make sense of human language, and respond coherently, since the 1940s. The field is known as natural language processing (NLP), and although amateurs and professionals enter their best NLP programs into competitions every year, the past seven decades still haven’t produced a single NLP program that allows computers to consistently fool questioners into thinking they’re human.

NLP has attracted a wide variety of approaches over the years, and linguists, computer scientists and cognitive scientists have focused on designing so-called symbolic architectures, or software programs that store units of speech as symbols. It’s an approach that requires
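An ELIZA-style rule illustrates that symbolic tradition: hand-written patterns and response templates stored as explicit symbols, with no learning involved. The rules below are toy examples of mine in that style, not any particular system's.

```python
import re

# Two ELIZA-style rewrite rules. Symbolic NLP of this kind stores patterns
# and templates by hand rather than learning representations from data.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "What makes you feel {}?"),
]

def respond(utterance):
    """Return the first matching rule's response, else a canned fallback."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1))
    return "Tell me more."

print(respond("I am worried about my network"))
# -> Why do you say you are worried about my network?
```

The brittleness is obvious: anything outside the pattern list falls through to the fallback, which is exactly why purely symbolic systems struggle to sustain a convincing conversation — and why approaches like ANNABELL's neural working memory are interesting.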

Segmentation: A Fire Code For Network Security

New technologies like software-defined segmentation are making it easier to prevent a compromise from spreading by separating users and network resources into zones.

 

Cybersecurity panic seems to be on the rise in 2015. Hacked cars, compromised healthcare records and one of the largest breaches in U.S. history have left many people wringing their hands in anxiety.

This scenario reminds me of the reactions to the large fires of the industrial revolution and the changes that happened afterward. In 1871 a fire broke out in Chicago, America’s fastest growing city at the time. Aided by high winds, the fire jumped from building to building until roughly one third of the city was destroyed. The event received immense media attention, and large-scale fires would later affect other urban centers such as London and Boston.

At the time, many criticized the rush to industrialize or blamed the catastrophe on divine retribution for a lack of morality – sound familiar? Despite the panic, the ultimate solution to the problem was constructing buildings a little farther from each other, utilizing flame-resistant materials and implementing quick response to fires. Fire codes are meant to create an environment that limits the spread of a fire, and the concept is equally effective when
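The zone idea translates directly to networks. The sketch below is a minimal illustration (the hosts, zones, and policy are invented): traffic passes only where a zone-to-zone policy explicitly permits it, so a compromise in one zone cannot spread freely, just as a firewall wall contains a blaze.

```python
# Minimal model of zone-based segmentation: hosts are assigned to zones, and
# traffic is allowed only where a zone-to-zone policy explicitly permits it.
# Hosts, zone names, and the policy are illustrative.
ZONES = {
    "laptop-17": "users",
    "hr-db": "restricted",
    "web-01": "dmz",
}
# (source zone, destination zone) pairs that may communicate.
POLICY = {("users", "dmz"), ("dmz", "restricted")}

def allowed(src_host, dst_host):
    """Default-deny: anything the policy doesn't list is blocked."""
    return (ZONES[src_host], ZONES[dst_host]) in POLICY

print(allowed("laptop-17", "web-01"))  # True: users may reach the DMZ
print(allowed("laptop-17", "hr-db"))   # False: no direct path to restricted
```

A compromised user laptop can reach the DMZ but never the restricted zone directly; the attacker would have to traverse (and be detected at) each zone boundary, which is the fire-code effect in network form.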

Cisco VIRL: More Than A Certification Study Lab

Cisco’s Virtual Internet Routing Lab provides students and experienced networking pros with a tool to build network topologies. Here’s what to expect in terms of capabilities and limitations.

One of the most difficult things when starting out in networking is getting your hands on equipment. No one will argue against building a lab for studying, but there are better options than buying an entire rack of physical hardware. There’s the network simulation platform GNS3, which has been around for years, but the legality of sourcing software images has always been worrisome. Now, Cisco has finally answered our call for a legitimate virtual lab that can be used by both students and experienced networking pros.

For those not in the know, Cisco’s Virtual Internet Routing Lab (VIRL) is a network design and simulation environment that includes a graphical user interface, much like GNS3, to build virtual network topologies. It includes an OpenStack-based platform that runs your IOSv, IOSvL2, IOS XRv, NX-OSv, CSR1000v, and ASAv software images on the built-in hypervisor. Using the VM Maestro GUI, you can easily create complex network topologies and have the basic router and switch configurations built using AutoNetkit.

 

Cisco VIRL comes in a