How Network Complexity Killed Water Cooler Collaboration

by Grant Ho | NetBrain Technologies

Reprinted from The VARGuy.com

Effective collaboration could once be defined as hanging around the company water cooler discussing the latest network issues and emerging trends. Thanks to the ever-increasing complexity and scale of today’s networks, however, these casual conversations are no longer an effective method of information sharing. As networks grow in size and complexity, so do the teams that run them. Enterprise networks are no longer operated by small teams in a single location, but by teams of varying technical skill spread across diverse geographies. If network teams want to stay on the same page with the rest of their IT counterparts and respond quickly to network issues, new forms of collaboration and information sharing are required.

Out of sight, out of mind?

The new era of collaboration requires an effective strategy that ensures vital information is shared across teams. However, in a recent NetBrain survey, 72 percent of network engineers cited a lack of collaboration between teams, specifically network and security teams, as the number one challenge when mitigating an attack. Due to increasing network complexity, these teams have become more siloed, making ongoing communication difficult. This becomes problematic when a network outage arises and teams don’t know how to jointly respond as they have little to no experience working together. The result? Hours wasted on communicating issues that should be standard procedure rather than swiftly addressing and repairing the problem.

Many network teams are combating this issue with a multi-phase approach to improving collaboration, processes and tools. When it comes to the network, automation is a critical enabler for all stakeholders, providing the ability to share domain expertise and operational data during network problems.

Democratize knowledge

The simplest form of collaboration is knowledge-sharing. This means making sure that everyone tasked with managing the network is equipped with the appropriate information to perform their job optimally. While it seems simple, the approach can be a significant challenge for any enterprise network team.

Today, teams struggle to document and share knowledge because the process is time-consuming and tedious. This limits the ability to scale, as critical network information is often stored in the brains or on the hard drives of tribal leaders who have worked on a specific network for many years, and that domain knowledge runs deep. While tribal leaders have spent years honing their skills and learning the ins and outs of their networks, organizations gain an advantage by ensuring more network engineers are equipped with similar levels of information. For instance, what happens when a busy, senior Level-3 engineer isn’t around to troubleshoot a network outage? Democratizing her best practices so that more junior engineers (i.e., Level-1 and Level-2 engineers) can diagnose the problem, instead of waiting and escalating all the way to the Level-3 engineer, can result in quicker response times and better SLAs.

Streamline data sharing

While sharing best practices is critical, collaboration is more than just a clear picture of how to do the work. Sharing is also crucial at the task level, where insights and conclusions should be reached as a team. However, organizations often struggle with this process: many network teams communicate via email or web conference, where data sharing is cumbersome and typically arrives as log files or raw data dumps.

Drawing key insights and actionable decisions from a data dump is difficult. Even when a dump contains the insight an individual needs for the task at hand, working through it is time-consuming and tedious. These manual methods of data collection and sharing (e.g., box-by-box, screen scraping or legacy home-grown scripts) result in slower troubleshooting and a longer mean time to repair (MTTR). Take a typical network operations center, where a high degree of redundant work occurs because Level-3 engineers often have to repeat the same tasks as Level-2 engineers, and Level-2 engineers have to do the same with Level-1 engineers. The culprit is largely a poor flow of information: incomplete documentation at best and incorrect documentation at worst. Instead, by providing network teams with a common visual interface, such as a map of the network’s problem area, they can access the most relevant data while utilizing shared insights to accelerate decision-making.

Security through collaboration and automation

While collaboration is critical to network troubleshooting, it becomes particularly essential when the network comes under attack. During a security incident, the network team typically works with the security team, the applications team, and related managers. With so many stakeholders involved, centralized information becomes imperative. That’s why it’s critical to democratize best practices and seamlessly share information to drive shorter repair times and better proactive security.

Again, automation plays a key role. For instance, by automating the creation of the exact attack path, network and security teams can quickly get on the same page by gaining instant visibility into the problem. Moreover, when diagnosing the problem, automating best practices contained in off-the-shelf playbooks, guides, and security checklists is essential. Digitizing those steps into runbooks that can be automatically executed, and capturing runbook insights so they can be shared across network and security teams, results in faster responses and less human error. These runbooks can then be enhanced with lessons learned from the security event to improve responses down the road. As networks are increasingly at risk, organizations that learn from the past will be at an advantage when it comes to mitigating future threats.
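
To make the runbook idea concrete, here is a minimal sketch of what an executable runbook might look like: an ordered list of named diagnostic steps whose output is captured in a single shareable report. The step names, commands, and local execution are hypothetical illustrations, not part of any NetBrain product API; in a real environment each step would run against network devices over SSH or an API.

```python
import datetime
import json
import subprocess

# Hypothetical runbook: an ordered list of named diagnostic steps.
# The target address 10.0.0.1 is a placeholder.
RUNBOOK = [
    ("check_reachability", ["ping", "-c", "3", "10.0.0.1"]),
    ("trace_path",         ["traceroute", "10.0.0.1"]),
    ("show_local_routes",  ["netstat", "-rn"]),
]

def execute_runbook(runbook):
    """Run each step, capture its output, and return a shareable report."""
    report = {"started": datetime.datetime.utcnow().isoformat(), "steps": []}
    for name, command in runbook:
        result = subprocess.run(command, capture_output=True, text=True)
        report["steps"].append({
            "step": name,
            "command": " ".join(command),
            "exit_code": result.returncode,
            "output": result.stdout[:2000],  # truncate for readability
        })
    return report

if __name__ == "__main__":
    # The resulting report can be attached to a ticket or chat thread so that
    # network and security teams all see the same diagnostic evidence.
    print(json.dumps(execute_runbook(RUNBOOK), indent=2))
```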

The bottom line is that the scale and complexity of networks are changing how organizations respond to network issues and security threats. Automating critical data-sharing will foster better collaboration and results than the water cooler ever did.

About the Author

Grant Ho is an SVP at NetBrain Technologies, provider of the industry’s leading network automation platform. At NetBrain, he helps lead the company’s strategy and execution, with a focus on products, events, content and more. Prior to joining NetBrain, Grant held various leadership roles in the healthcare IT industry and began his career as a strategy consultant to wireless and enterprise software companies. You can follow Grant on Twitter @grantho and you can follow NetBrain @NetBrainTechies.

Posted in Decision Making, Modeling, Society

Complexity and AR (Augmented Reality)

I have commented on several occasions on this blog that we are living in an increasingly complex world. Fields of knowledge, from medicine to the social sciences and many others, are ever-expanding. According to a 2013 IBM article, 2.5 quintillion bytes of data are created every day. This data comes from sensors, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals, to name a few sources. Connections and linkages between data points, the true source of complexity, are also expanding. Google can link people to places through their phones, including how long they stayed at each place. Amazon, Facebook, and Google understand people and their interests based on data they collect through interactions.
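
For a sense of scale, here is a quick back-of-the-envelope conversion of that figure, assuming the 2.5 quintillion (2.5 × 10^18) bytes-per-day estimate is taken at face value.

```python
# Back-of-the-envelope scale of "2.5 quintillion bytes per day" (2.5e18 bytes).
BYTES_PER_DAY = 2.5e18

exabytes_per_day = BYTES_PER_DAY / 1e18              # 1 exabyte = 1e18 bytes
terabytes_per_second = BYTES_PER_DAY / 86_400 / 1e12
zettabytes_per_year = BYTES_PER_DAY * 365 / 1e21     # 1 zettabyte = 1e21 bytes

print(f"{exabytes_per_day:.1f} EB per day")          # ~2.5 EB/day
print(f"{terabytes_per_second:,.0f} TB per second")  # ~29 TB/s
print(f"{zettabytes_per_year:.2f} ZB per year")      # ~0.91 ZB/year
```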

The proliferation of data, and of the relationships among data points, makes it more and more difficult for us to find and understand what we need to know and to make sound decisions. We need tools that help us take advantage of this data and that inform and educate us. This is where AR (augmented reality) comes in.

AR is a relatively new concept that seeks to overlay digital components on top of a real scene. This can be done through viewing glasses or a screen, where objects or information are presented on top of a live or still view. Large companies like Facebook, Google, Apple and Microsoft are each embracing this general idea with different objectives and perspectives.

I came across postings by Luke Wroblewski (on LinkedIn), a product director at Google. Luke has begun to describe a conceptual approach to AR in which the digital overlays are designed to serve specific functions by leveraging contextual data (in this instance, data known to Google). In one of his mock-ups, the AR algorithm recognizes that the driver needs to find a gas station. The AR platform then overlays the price differential and distance of alternative stations relative to the one in sight. To me, this is a great example of how AR can help manage complexity. The AR platform distills the inherent relationship between cost and distance, which is at the heart of the “complex” decision the driver must make: What are the risks of driving further to save money? Do I have time? Do I have enough gas in the tank? Do I really know how little gas is in the tank?
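
As a thought experiment, here is a minimal sketch of the kind of cost-versus-distance calculation such an overlay would be distilling for the driver. The prices, distances, and fuel-economy figures are made-up inputs, and the scoring rule is my own simplification rather than anything Google has described.

```python
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    price_per_gallon: float  # USD
    detour_miles: float      # extra driving compared with the station in sight

def net_savings(station, baseline_price, tank_gallons=12.0, mpg=30.0):
    """Savings from filling up at `station` instead of the station in sight,
    minus the fuel burned on the detour. Ignores the value of the driver's time."""
    fill_savings = (baseline_price - station.price_per_gallon) * tank_gallons
    detour_cost = (station.detour_miles / mpg) * station.price_per_gallon
    return fill_savings - detour_cost

if __name__ == "__main__":
    in_sight_price = 3.89  # price at the station the driver can already see
    alternatives = [
        Station("Station A", price_per_gallon=3.59, detour_miles=2.5),
        Station("Station B", price_per_gallon=3.75, detour_miles=0.8),
    ]
    for s in sorted(alternatives, key=lambda s: -net_savings(s, in_sight_price)):
        print(f"{s.name}: net savings of about ${net_savings(s, in_sight_price):.2f}")
```

An overlay built on this logic could rank nearby stations by net savings rather than by price alone, which is exactly the cost-versus-distance trade-off described above.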

Wroblewski and Google are onto something here: this is the kind of “representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multi-resolution data across application domains” that Meyer Z. Pesenson et al. call for in their paper “The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch.”

AR may be more than a hammer in search of a nail. It may be a new conceptual approach to help us deal with big data and its complexity.

Posted in Decision Making, Products, Society

Insight into Brain’s Complexity Revealed Thanks to New Applications of Mathematics

Re-posted from the European Union CORDIS

The lack of a formal link between neural network structure and its emergent function has hampered our understanding of how the brain processes information. With the discovery of a mathematical framework that describes the emergent behaviour of the network in terms of its underlying structure, that understanding has come one step closer.


The need to understand geometric structures is ubiquitous in science and has become an essential part of scientific computing and data analysis. Algebraic topology offers the unique advantage of providing methods to describe, quantitatively, both local network properties and the global network properties that emerge from local structure.

As the researchers working on the Blue Brain project explain in a paper, ‘Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function’, while graph theory has been used to analyse network topology with some success, current methods are usually constrained to establishing how local connectivity influences local activity or global network dynamics.

Their work reveals structures in the brain with up to eleven dimensions, exploring the brain’s deepest architectural secrets. ‘We found a world that we had never imagined,’ says neuroscientist Henry Markram, director of the Blue Brain Project. ‘There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.’

As the complexity increases, algebraic topology comes into play: it is a branch of mathematics that can describe systems with any number of dimensions. Researchers describe algebraic topology as being like a microscope and a telescope at the same time, zooming into networks to find hidden structures and seeing the empty spaces. As a result, they found what they describe in their paper as a remarkably high number and variety of high-dimensional directed cliques and cavities. These had not been seen before in neural networks, either biological or artificial, and they were identified in far greater numbers than in various null models of directed networks.
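
For readers unfamiliar with the terminology, a directed clique is a group of neurons in which every pair is connected and all the connections align with a single ordering, so one neuron feeds all the others and one receives from all the others; a clique of n + 1 neurons is treated as an n-dimensional object. Below is a minimal brute-force sketch of counting such cliques on a toy directed graph (made-up edges, not Blue Brain data), just to make the definition concrete.

```python
from itertools import combinations, permutations

# Toy directed graph: adjacency given as a set of (source, target) edges.
edges = {(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3), (3, 4)}
nodes = {v for edge in edges for v in edge}

def is_directed_clique(subset, edges):
    """True if some ordering of `subset` has every earlier node sending an
    edge to every later node (all-to-all, feed-forward connectivity)."""
    return any(
        all((order[i], order[j]) in edges
            for i in range(len(order)) for j in range(i + 1, len(order)))
        for order in permutations(subset)
    )

# A directed clique of k nodes corresponds to a (k - 1)-dimensional simplex.
for size in range(2, len(nodes) + 1):
    count = sum(1 for s in combinations(nodes, size) if is_directed_clique(s, edges))
    if count:
        print(f"{size}-node directed cliques ({size - 1}-dimensional): {count}")
```

The Blue Brain work counts objects like these, and the cavities they enclose, in reconstructed cortical microcircuits rather than in toy graphs, but the underlying notion is the same.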

The study also offers new insight into how correlated activity emerges in the network and how the network responds to stimuli. Partial support was provided by the GUDHI (Algorithmic Foundations of Geometry Understanding in Higher Dimensions) project, supported by an Advanced Investigator Grant from the EU.

For more information, please see:
CORDIS Project website

Source: Based on project information and media reports
Posted in Modeling, Society

Howard Raiffa, Harvard Professor, decision analysis pioneer, dies at 92.

From the Harvard Gazette:

Howard Raiffa, the Frank P. Ramsey Professor Emeritus of Managerial Economics, died July 8 at his home in Arizona following a long battle with Parkinson’s disease.  Raiffa joined the Harvard faculty in 1957. With a diverse group of Harvard stars that included Richard Neustadt, Tom Schelling, Fred Mosteller, and Francis Bator, Raiffa would form the core of what would be the modern Kennedy School (HKS) in 1969, and played a central role in the School for decades as a teacher, scholar, and mentor. Together with colleague Robert Schlaifer, Raiffa wrote the definitive book developing decision analysis, “Applied Statistical Decision Theory,” in 1961. He also wrote a textbook for students like those at HKS, and a simpler, popular book on the subject.

“Along with a handful of other brilliant and dedicated people, Howard figured out what a school of public policy and administration should be in the latter decades of the 20th century, and then he and they created that school,” said Raiffa’s longtime friend and colleague Richard Zeckhauser, Frank Plumpton Ramsey Professor of Political Economy.

“Despite his great accomplishments as a teacher and scholar, those who knew Howard well treasured him for the generosity of his spirit, his great warmth, and his desire to always be helpful, whether fostering cooperation among nations, choosing where to locate Mexico City’s airport, or designing a curriculum for teaching analytic methods.”

This combination of work marks Raiffa as a model for the Kennedy School: His scholarly analysis advanced experts’ understanding of many important questions, and he also knew how important and valuable it was for him to speak to the broader world.  In particular, he recognized that the methods he had pioneered and mastered could be helpful to people with much less sophistication, and he reached out to help them.

“Howard was a giant in the history of the Kennedy School and a towering figure in the fields of decision analysis, negotiation analysis, and game theory,” said HKS Dean Douglas Elmendorf. “All of us who are associated with the Kennedy School are greatly in his debt.”



Are we living in a simulated world?

That’s right: there is a debate going on in some scientific and not-so-scientific circles about whether we are living in a simulation, just like Super Mario. Warning: this post may introduce a bunch of ideas you may never have heard about…

Where did this idea come from?

In 2001, Nick Bostrom, a Swedish philosopher based at the University of Oxford, wrote a paper entitled “Are You Living in a Computer Simulation?”, which he published in 2003. The paper is in part inspired by ideas developed in the science fiction, futurology and philosophy worlds, including post-humanism, the “big world”, and terraforming. Let me quickly explain what these are about… some of this stuff is a bit “out there”.

  • Post-humanism is about what follows the human race, or what evolves from the human race. You could think of intelligent cyborgs, or robots that would take over from us.
  • Terraforming is about conquering and establishing human life on other planets.
  • The Big World is a universe with macroscopic superposition, where entire worlds like ours are superposed onto one another.

Bostrom proposed that at least one of the following three scenarios must be true: 1) the human race will go extinct before becoming post-human; 2) a post-human civilization will not run simulations of its evolutionary history; or 3) we are currently living in a simulation. It must be one of these three, says Bostrom, and he argues that the probability of each is significant, i.e., not zero. Bostrom did not start this discussion: decades before his mathematical argument, folks like Jacques Vallee, John Keel, Stephen Wolfram, Rudy Rucker, and Hans Moravec explored this notion.
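
For readers who want the quantitative core of the argument, Bostrom frames it as a calculation of the fraction of observers with human-type experiences who live in simulations. The version below is a paraphrase of the formula in his paper, so consult the original for the exact notation.

```latex
% f_P     : fraction of human-level civilizations that reach a post-human stage
% \bar{N} : average number of ancestor-simulations run by such a civilization
% \bar{H} : average number of individuals who lived before that stage
\[
  f_{\mathrm{sim}}
    = \frac{f_P \, \bar{N} \, \bar{H}}{f_P \, \bar{N} \, \bar{H} + \bar{H}}
    = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
\]
% If the product f_P * N-bar is large, f_sim is close to 1; the trilemma follows
% from asking which factor keeps that product small.
```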

What do they mean by “simulation”?

Simulation means that the rules under which we exist and by which we live are controlled by a machine. The simulation is a “computer program” that makes things in our universe happen. This implies, of course, that there is another universe, or reality, in which this controlling program and its computer exist. I will explain below why I put “computer program” in quotes. Of course, this is pure supposition. The simulation could have set only the basic rules of life and evolution, from which the world has evolved. Or it could control every step of our existence and the movement of every particle in creation. Is the simulation program perfect? Some writers have argued that the simulators (the ones controlling the simulation) may have the ability to erase any program errors from our memory, or that we could not even perceive these program errors if they exist…

Why is this idea even debated these days?

This topic is debated in pretty serious circles, like the Isaac Asimov Memorial Debate, recently held at New York’s Hayden Planetarium.

We live in an increasingly digital world, where computers are getting faster, bigger and cheaper all the time. Video games and computer-generated movies (simulations) are becoming more life-like. Computer-driven machines (post-human robots?) are becoming more human-like. Movies like The Matrix and The Truman Show have popularized the notion of outside worlds, or worlds within worlds.

But there are other interesting developments in the world of physics that make this idea intriguing. Over the centuries, we have tried to explain the world around us through science. We have built tools to observe our world further and further, both inward and outward. The electron was discovered in 1897, the neutron in 1932, and quarks were first proposed in the early 1960s. It took very sophisticated and expensive tools such as linear accelerators and the Large Hadron Collider (LHC), the world’s largest and most powerful particle collider, to finally confirm the existence of the famous Higgs boson in 2012. At the same time, we are learning about dark matter and dark energy, which may explain how the universe is expanding.

Evidence increasingly points to the possibility that our world may be made of waves and bits. The matter we see around us may also ultimately be bits, waves, and information. One of the first to advance such a theory was Edward Fredkin, a professor at MIT and later at Boston University and Carnegie Mellon. Back in the sixties, he came up with the theory of digital physics, postulating that information is more fundamental than matter and energy. He stated that atoms, electrons, and quarks ultimately consist of bits, binary units of information. So this is not a completely new story.

Mathematicians and philosophers are exploring such theories because, for now, our scientific tools cannot look any smaller or any further. So, if we are starting to believe in a digital world made of ones and zeros, it is no longer a giant leap to think of that world as a giant computer, or as something driven by a computer program. Take a look at Kevin Kelly’s 2002 article, “God Is the Machine.”

Simulation: yes or no?

So, are we living in a simulation? There are plenty of discussions and arguments out there on both sides. Some people argue that it would take too much energy to run the simulation of our world, let alone other simulations running at the same time. Imagine walking on a beach: every grain of sand would have to be part of this simulation, moving in and out of the ocean. Think about every encounter and every discussion with another human being being pre-scripted…

My personal view is that the idea that we live in a simulation is improbable. One main reason is that all of these ideas and concepts are the product of our language and our ability to reason using language. Mathematics is another form of language and reasoning. Language and conceptual reasoning are the product of our limited human abilities. The concept of “simulation” is something that we can grasp, but what about a million other concepts that we cannot grasp or formulate? What about a million other forms of intelligence out there? Zeroes and ones, the alleged basis of our universe, are mere human inventions. It is possible that we live in one of several forms of reality, but the “simulation” idea strikes me as simplistic. There is likely something out there so different that we cannot even express it with our limited intellect or language…

Along the same line (or on the other hand)… describing the universe as pure information may also be a simplification. Life as we know it runs on some fundamental rules. The first is reproduction: life has a built-in reproduction mechanism. The second is healing: most living organisms are able to sense when they are hurt or attacked and can react with a plan to heal themselves. The third is that organisms evolve and adapt to their environment in order to survive. Something, somehow, has come up with these rules…

Posted in Decision Making, Mathematics, Modeling, Society

Organizational complexity is taking a toll on business profits

In July and August 2015, The Economist conducted a survey of 331 executives. The survey focused on the impacts of organizational complexity and the efforts companies have taken to reduce it.

Complexity in Large Organizations

Large organizations are viewed as dynamic networks of interactions, and their relationships go well beyond aggregations of individual static entities. For instance, these organizations may operate in many countries and manage a vast number of people and brands. But the perception of complexity can also stem from a lack of employee role clarity or poor processes.

Unwieldy complexity often results from business expansions or bureaucracies that unnecessarily complicate a company’s operating model, leading to sluggish growth, higher costs and poor returns. The Economist survey found that over half (55%) of businesses suffer from organizational complexity that is perceived to take a toll on profits. General management is most affected by complexity, followed by employee relations and customer service. Over one-third of executives said they spend up to 25% of their time managing that complexity, and almost one in five spends up to 50% of the workday dealing with it.

Two kinds of complexities

Experts believe that complexity in an organization comes from two sources: the complexity arising from the outside world, with its dynamism and unpredictability, and a company’s internal complexity caused by poor processes, confusing role definitions, or unclear accountabilities. The most damaging kind of complexity comes from within. In a survey of 58 companies, Martin Mocker and Jeanne Ross of MIT’s Center for Information Systems Research (CISR) found that companies able to create value from product complexity while keeping their processes simple outperformed the others.

According to Mocker, companies can then boost value and organizational effectiveness through a combination of three sets of actions:

  • Break the separation between those dealing with product complexity and those dealing with process complexity,
  • Design processes and systems to cushion internal impacts of complexity by eliminating silos, and
  • Offer customers ways to deal with increased complexity, for instance by offering personalized choices.
Posted in Decision Making

John Holland, ‘Father of Complexity,’ Dies at 86

Reposted from https://reason.com/blog/2015/08/31/john-holland-father-of-complexity-dies-a

Pioneer in complex adaptive systems passed away earlier this month

Aug. 31, 2015

Scott Page writes at the Washington Post:

Holland was fascinated with von Neumann’s “creatures” and began wrestling with the challenge and potential of algorithmic analogs of natural processes. He was not alone. Many pioneers in computer science saw computers as a metaphor for the brain.

Holland did as well, but his original contribution was to view computation through a far more general lens. He saw collections of computational entities as potentially representing any complex adaptive system, whether that might be the brain, ant colonies, or cities.

His pursuit became a field. In brief, “complex adaptive systems” refer to diverse interacting adaptive parts that are capable of emergent collective behavior. The term emergence, to quote Nobel-winning physicist Phil Anderson’s influential article, captures those instances where “more is different.” Computation in the brain is an example of emergence. So is the collective behavior of an ant colony. To borrow physicist John Wheeler’s turn of phrase, Holland was interested in understanding “it from bit.”

Read the rest of Page’s write-up at the Post.

Readers interested in introducing themselves to Holland should read Signals and Boundaries: Building Blocks for Complex Adaptive Systems, which applies the ideas of complexity to biology, markets, and even governments, and vice versa.


Posted in Definition, Healthcare, Society