
Internet Series #3: Applications basics 

By Chris Newell
Founder & President

We all use the Internet daily, whether in the office, at home, or on our mobile devices. As we discussed earlier in our Internet Series, the type of Internet you use matters, and not all Internet backbones and suppliers are created equal. In this part of the series, we will cover applications. If the Internet is the steak, applications are the sizzle.  

The Internet-based application market is vast and often complicated. Applications range from well-known providers like O365/Teams and G Suite to more niche providers that fill a specific role within your organization. Regardless of which applications you use, your Internet provider will play a vital role in their success or failure.  

Most applications work well Over the Top (OTT) of the Internet: a connection to the Internet is all that is needed, and the application rides “over the top” of it to function. Examples of this in the telecommunications industry are VoIP, UCaaS/CCaaS, and SIP. Other services connect to the application directly over the Internet.  

Examples of this would be Internet direct connects and on-ramps to AWS, IBM, Oracle, Azure, Google, and Rackspace for cloud compute, disaster recovery, and storage. A direct connect allows your traffic to reach these platforms directly, which increases connection speed, moves data more quickly, and improves security.  

Getting connected  

Applications can also communicate with one another over the Internet through Application Programming Interfaces (APIs). APIs allow applications to share information and keep data connected across an organization.  
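
As a rough illustration of the idea, most of these integrations come down to one application calling another’s REST API over HTTPS and reusing the data it returns. The short Python sketch below assumes a hypothetical CRM endpoint and token; the URL, field names, and authorization header are placeholders, not a real vendor API.

    import requests

    # Hypothetical CRM endpoint and token, used only to show the pattern of
    # one application pulling data from another over a REST API.
    CRM_API = "https://crm.example.com/api/v1/customers/1001"
    HEADERS = {"Authorization": "Bearer <your-api-token>"}

    response = requests.get(CRM_API, headers=HEADERS, timeout=10)
    response.raise_for_status()

    customer = response.json()
    # The record could now be pushed into a second system (ticketing, billing,
    # a UCaaS directory, etc.) through that system's own API.
    print(customer.get("name"), customer.get("email"))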

One of the newest technologies utilizing the Internet is SD-WAN, which is like a Virtual Private Network (VPN) on steroids. Like VPN, SD-WAN traffic is encrypted, and a router can intelligently fail over to a secondary Internet provider. However, that is where the similarity ends.  

SD-WAN can test the health of an Internet connection to the destination down to the packet level and intelligently route packets across disparate Internet providers, prioritizing traffic and keeping live traffic registrations up if one of the connections drops. Multiple SD-WAN providers also offer direct connects to hundreds of application providers, which helps clients organize and support their traffic strategy.  
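
To make the idea concrete, here is a minimal Python sketch of the logic behind that path selection: probe each Internet link, score it on packet loss and latency, and prefer the healthier one. Real SD-WAN appliances do this continuously, per flow or per packet, in the data plane; the two gateway addresses below are placeholders, not real provider gateways, and the ping flags assume Linux/macOS.

    import re
    import subprocess

    # Placeholder gateway IPs for two disparate Internet providers.
    LINKS = {"provider_a": "203.0.113.1", "provider_b": "198.51.100.1"}

    def probe(gateway, count=5):
        """Ping a link's gateway and return (packet loss %, average RTT in ms)."""
        out = subprocess.run(["ping", "-c", str(count), gateway],
                             capture_output=True, text=True).stdout
        loss = re.search(r"([\d.]+)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)
        if not (loss and rtt):
            return 100.0, float("inf")  # treat an unreachable link as fully lost
        return float(loss.group(1)), float(rtt.group(1))

    # Score every link, then route via the one with the least loss, then lowest RTT.
    scores = {name: probe(gw) for name, gw in LINKS.items()}
    best = min(scores, key=lambda name: scores[name])
    print("Route traffic via", best, "->", scores[best])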

Applications on the day-to-day  

During the COVID-19 pandemic, most businesses used collaboration applications such as Zoom, Glip, Teams, GoTo, and others to communicate with their internal and external customers while working from home.  

When garbled voice, pixelation, or frozen screens occur during a meeting, it is usually due to the quality and congestion of the Internet service. While most organizations do not control which Internet provider their employees or customers use at home, it is evident how important a quality connection is to supporting these applications.  

When deciding on an Internet strategy for your organization, it is important to determine which SD-WAN and OTT applications will be used, their bandwidth requirements, and how sensitive they are to the quality of your Internet connection. While the applications deliver the value, the delivery method, the quality of the Internet, and the underlying provider are vital to success. 

Internet Series #2: Peering 

By Chris Newell
Founder & President

When looking at Internet providers for your business (especially in this day and age of VPN, SD-WAN, and cloud services), it is important to understand where your provider’s physical network connects to other networks, a practice called “peering”.  

For example, a client may use Lumen for primary IP at its main offices while its remote offices use Spectrum, Comcast, Cogent, and AT&T. These are not the same network, so how do they communicate? They exchange traffic at Internet exchanges, where providers converge and pass traffic back and forth, known as “peering points”.  

Some of the primary peering points where Internet providers converge in the US include Chicago, NYC, LA, Dallas, and Miami. There are also secondary peering points throughout the US where fewer networks converge to connect regional Internet providers and traffic, in Denver, Ashburn, Houston, Atlanta, Kansas City, Detroit, and Seattle, to name a few. 

The importance of traffic 

Traditionally, the exchange of traffic comes at no cost to either Internet provider, because the traffic each sends typically offsets what it receives (settlement-free peering). However, if the traffic being passed is heavily skewed in one direction, Internet providers may require payment to accept it, or throttling may occur. This is important to consider when choosing an Internet provider: you do not want your traffic throttled or dropped because of a dispute between providers. 

When deploying SD-WAN, VPN, SaaS, and cloud services, understanding how your traffic is handed off to a disparate carrier is vital to the health of an application. The originating carrier wants to hand off the traffic as soon as it can at the closest peering point, thereby minimizing congestion on its own network. Therefore, where your data is peered matters significantly.  

For example, if you have two offices in Dublin, OH communicating over VPN and using different Internet providers, your traffic may travel to Chicago, IL or Dallas, TX before returning to Dublin to complete the conversation.  

This adds latency, hops, and complexity to the communication. The same scenario applies to cloud services. If you are using a CCaaS or UC provider without a regionalized calling solution, and that provider’s primary nodes are in Dallas and Seattle, your traffic will traverse to the closest peering points (which may be in Chicago or Ashburn) and then across the US to reach the primary CCaaS or UC nodes. 

Creating routes 

We have often run traceroutes on IP addresses to see how many hops it takes to reach a destination. When running a traceroute, you will see the peering points your packets pass through along the way. At times, routing tables can loop your traffic multiple times before it peers with other providers.  
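
If you want to try this yourself, the short Python sketch below simply shells out to the system traceroute command and prints the hops; the destination address is just an example. Router hostnames along the path often encode the city or exchange (for example “chi”, “dfw”, or “eqix”), which is how you can spot the peering points.

    import subprocess

    # Example destination; substitute any public IP or hostname you care about.
    DEST = "8.8.8.8"

    # Assumes the Linux/macOS "traceroute" binary; on Windows use ["tracert", DEST].
    result = subprocess.run(["traceroute", "-m", "25", DEST],
                            capture_output=True, text=True, timeout=300)
    print(result.stdout)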

Clients should utilize Internet providers who keep their routes clean and efficient, to benefit from faster connections and lower latency. In this era of mergers and acquisitions among Internet providers, we have seen merging networks cause massive latency and traffic issues because of inefficient routes created by incorrect or overly complex tables. It is possible to hard-code your peering or routing requests so your traffic does not get caught in a bad routing table or peering situation. 

Internet is not just Internet. It is important to consider who your provider is, who they peer with, whether they pay for peering, and where they peer before implementing an Internet solution. 

Internet Series #1: providers overview 

By Chris Newell
Founder & President

There are many factors to consider when choosing an Internet Service Provider (ISP). It’s not always about who has the lowest cost or the largest inventory of IP routes. Connectivity, traffic flow, and network congestion are a few of the key factors discussed below.  

Looking at https://asrank.caida.org/, you will see that the undisputed ISP champion is Level 3/Lumen, which combines the Level 3, Qwest, US West, and CenturyLink networks. No other provider comes close to the size of Level 3/Lumen’s Autonomous System (AS), which means Level 3/Lumen has the most routing prefixes to the Internet. However, having the largest number of routing prefixes does not mean you have the most robust overall network. For instance, Zayo was one of the first to provide 100 Gbps access to the Internet.  

Understanding internet mediums  

The medium of connectivity to endpoints/offices can make a difference in how stable Internet service is. Internet access was originally delivered over legacy TDM circuits (T1, NxT1, DS3) and SONET services (OC3, OC48, OC192), but keep in mind that copper pairs were never designed to transmit Internet data services; they were developed for voice.  

These mediums ended up creating instability, inefficiency, and packet loss, as well as limiting overall speeds. Ethernet over Copper, or EoC, is an interesting way of combining copper pairs into a single connection rather than running MLPPP to bond T1s (NxT1). Currently, the most efficient way to deliver bandwidth is over native Ethernet, fiber, or microwave.  

The “flow” issue 

Internet traffic is a very intertwined business. Traffic flows over multiple networks through multiple peering points (where Internet providers hand off traffic between ISP networks) to reach its destination.  

For example, if Amazon uses AT&T and your office uses Verizon for its Internet access, then to reach Amazon’s services, your Internet request needs to traverse from Verizon to AT&T. That hand-off between providers is called peering, and it only takes place at certain peering points throughout the US and the world. This is extremely important when using VPN, SIP, UCaaS/CCaaS, or any latency-sensitive traffic.  

Some ISPs will grossly oversubscribe their networks, making congestion an issue and creating packet loss and retransmissions. They do this to increase profitability rather than build out additional capacity. There are also features that make some ISPs more attractive than others, including network-based firewalls and DDoS mitigation, prioritization of IP traffic, and privatized WAN traffic.  

In closing, when choosing an ISP it is important to consider the factors listed above. Stay tuned for the next blog in our Internet series… 

Why Your Organization Needs Managed Mobility Services 

By Chris Newell
Founder & President

The dynamic of the present-day workforce and culture is shifting towards mobility as more organizations are powering remote work to align with their organizational goals.  However, success is only possible when everyone is connected, even with geographically dispersed employees.  

With Managed Mobility Services, employees can perform work-related tasks and collaborate, even outside the office.  First, let us look at what managed mobility solutions are and what they can do. 

What Are Managed Mobility Services? 

Managed Mobility Services, or simply MMS, entail the procurement, provision, and management of smartphones, tablets, and other devices that integrate cellular or wireless connectivity to enhance collaboration across hybrid workspaces.  However, ensuring that MMS solutions are efficient and cost-effective requires strategic planning to keep the workforce productive and at optimal performance.  

That’s where Managed Mobility Services providers (MMS providers) step in – to design, deploy, manage, monitor, and optimize the organization’s mobile ecosystem throughout the entire mobility lifecycle.  

What Can Managed Mobility Service Providers Do? 

Streamlined Device Management 

By managing all aspects of the mobile environment, MMS providers not only devise a good plan for staging and deploying devices but also determine the best way to control and expand the cellular network.  This is done while following a core set of practices that give employees secure access to the organization’s data.  

Moreover, Managed Mobility Service providers also consider logistical aspects that organizations often overlook to facilitate seamless connection and collaboration. 

Make Data-Driven Decisions 

Data is the most critical component of every organization, and keeping track of inventory will allow organizations to see what’s working and what can be improved.  Mobility solutions give organizations access to different data types for every touchpoint of the mobile ecosystem, enabling data-driven decisions that contribute to cost savings and streamlined processes. 

Closely Monitor Organizational Systems 

Internally managing daily mobility operations is highly taxing.  It requires keeping track of piles of data that IT departments have to thoroughly review to monitor performance, opportunities, potential problems, and more.  MMS providers can take over this daily mobile device management effort, easing the burden on the IT team and freeing them to focus on the areas where they are well versed.  

Get Access To Comprehensive Support 

It is essential to choose the right MMS provider for the organization.  When the mobility infrastructure is not performing at its best, organizations can quickly access dedicated MMS support teams who work as an extension of the organization to formulate the best solution.  However, the value of that support depends on the provider’s skillset, familiarity with the specific deployment, and the magnitude of support offered.  

Our Expert Recommendation? 

Mobility has become a necessity as more organizations employ remote employees.  MMS solutions give employees a great deal of flexibility by allowing them to work together even when geographically dispersed, increasing productivity while moving toward organizational goals.  

If you are considering Managed Mobility Services for your organization, connect with us to discover how we can help.  

Multi-cloud Strategy: the more, the merrier 

By Chris Newell
Founder & President

Cloud technology has been widely adopted by businesses across the country. According to Gartner, 85% of enterprises will adopt a cloud-first strategy by 2025.  

What comes next? Cloud diversification: the practice of using multiple cloud environments (hybrid, private, and public) to house everything from software applications to workloads, assets, and redundancies. This multi-cloud strategy is a simple concept: use multiple vendors for security, flexibility, redundancy, and cost savings.  

Having all your eggs in one basket is seldom a good idea for any enterprise. By using more than one cloud service, companies can improve their ability to stay online, all while lowering costs and taking advantage of the different strengths of different cloud environments.  

Security & Uptime 

Anytime a company puts its assets into a cloud environment, it takes on the risk of attacks from cybercriminals and hackers, as well as unexpected downtime. By placing different resources on disparate cloud services, even a distributed denial-of-service (DDoS) attack won’t be able to shut down your business entirely. If one cloud goes down, whether from an ordinary outage or an attack, the rest can shoulder the load until the issue is resolved.  
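
As a minimal sketch of that failover idea, the Python snippet below checks the health of the same service deployed in two cloud environments and sends requests to whichever one responds. Both URLs and the /healthz path are placeholders, not real deployments, and in production this logic would usually live in DNS failover or a global load balancer rather than in client code.

    import requests

    # Placeholder health-check URLs for the same service in two clouds.
    ENDPOINTS = [
        "https://app.cloud-a.example.com/healthz",
        "https://app.cloud-b.example.com/healthz",
    ]

    def pick_endpoint():
        """Return the first endpoint that answers its health check."""
        for url in ENDPOINTS:
            try:
                if requests.get(url, timeout=3).status_code == 200:
                    return url
            except requests.RequestException:
                continue  # this cloud is down or unreachable; try the next one
        raise RuntimeError("no healthy endpoint available")

    print("Serving from:", pick_endpoint())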

According to Gartner, the average cost of downtime for a company ranges from $140,000 to $540,000 per hour. That is not a hit most companies can afford to take. A well-constructed multi-cloud security posture also enables clients to limit a security event to north/south penetration rather than east/west movement, effectively containing the issue until it can be resolved.  
 

Flexibility 

Cloud-hosting providers have diversified their product offerings, from simple storage environments to dedicated heavy-processing private clouds, hyperscale clouds like AWS and Azure, community shared-resource clouds, and so on.  

Different parts of your organization will have different requirements for workloads, and there is no point in overpaying for something you’ll never use. Picking and choosing from different cloud vendors to find the best match for each part of your business is the smarter play. Using multiple cloud providers and product offerings is becoming the norm.  
 

Catastrophe-Proof 

An acronym no IT person wants to see is SPOF – single point of failure. It can be a flaw in design, implementation, or configuration. If a SPOF goes down, it takes everything down with it. Think of the Death Star from the original Star Wars film; one well-placed proton torpedo from Luke Skywalker and the whole place turned into an ashtray.  

A well-designed multi-cloud strategy keeps a SPOF from taking the environment down at any point. A few cloud companies have unexpectedly gone out of business or locked clients out of their environments in the past decade.  

The most recent well-known example of this was when AWS suspended Parler in early 2021, effectively taking the conservative social media site offline. While this is an extreme example, it shows the importance of having a well-designed multi-cloud strategy.  

Conclusion 

Diversity and functionality are the goals of every IT organization. By engaging a multi-cloud strategy, companies can keep costs and security threats down while raising their ROI by choosing the right cloud design for all their needs.