Articles
Sample Articles from Bob Wallace.
Cutting Through the Fog Surrounding Private 5G Networks
- Published on 06 September 2021

(Source: Pixabay)
About a year ago, Deere & Co. grabbed headlines by entering the Citizens Broadband Radio Service (CBRS) auction and buying spectrum licenses. It plans to use the spectrum in private 5G networks in manufacturing facilities and to enable intelligent agriculture, both in rural areas that are not at the top of operators’ rollout lists.
Since the auction, it seems, everyone in the fast-evolving wireless networking ecosystem, except Deere, has talked endlessly about their private 5G network plans: wireless operators, network integrators, equipment vendors, and spectrum license owners. And all will team with partners to deliver the next-gen networks.
Defining “private 5G network”
The pressing question is how enterprises define “private” 5G networks at a time when providers express interest in staying involved after the network is built by selling management, monitoring, security, multivendor systems integration, and oversight services to enterprises. These options stand in sharp contrast to the time-tested DIY private network approach that Deere appears to be pursuing.
Operators clearly want in on the private 5G network opportunity, offering managed network options with network operations centers (NOCs) from which to provide the above-mentioned services, as well as SLAs.
Enterprise-handled?
Do we assume that enterprises handle configuration and ongoing configuration management, performance management, and setting service levels? These foundational tasks must be high on the decision list for those seeking private networks.
Once an enterprise determines that the ROI of a private 5G network beats that of using an operator’s public 5G service, which comes down to application and geographic coverage issues, the options are plentiful. Enterprise IT needs to decide exactly how much, if any, of a private 5G network it wants provided and managed. It must also decide where it stands on top issues and which assets it currently possesses.
The checklist
Specs and standards: In an emerging ecosystem, it's always wise to consult with relevant industry and standards-making bodies to determine what has been defined, what's pending, and what important areas are TBD. Working with these associations and standards-crafting bodies, enterprises and the entities they will rely on are on the road to making these high-speed nets a reality.
Talent: Does your enterprise have staff with the knowledge and skills needed to run and manage the private 5G network? If not, that expertise is available for hire from independent contractors, in some cases the same firms that helped U.S. operators with their current public 5G network rollouts.
Equipment partners: In the last year, network operators have announced partnerships with an array of wireless equipment vendors, such as Ericsson, Nokia, and Siemens AG, to build private 5G networks. Most of these vendors are already working with enterprises on pilot tests to gain crucial experience with the opportunity. These engagements are underway around the globe.
Security: Enterprises current on 5G know the technology is secure in and of itself. However, this make-or-break responsibility may be beyond the skillsets of some users' corporate IT and related departments. Such enterprises can engage third-party entities that understand their business, applications, performance levels, and goals.
Spectrum: The CBRS auction enabled Deere, Chevron, several power companies, and manufacturers to buy licenses for 10 MHz slices of spectrum in the shared-usage band. Mechanisms known as spectrum access systems are in place to prevent interference in the CBRS spectrum. Companies such as Federated Wireless have provided spectrum to enterprises, often as part of network solutions, since the CBRS auction closed.
In August, Federated and Anterix, a holder of spectrum in the 900 MHz band, announced a partnership to provide a dual-band offering to enterprises in the utilities vertical industry. IT managers should expect more vertical-specific solutions providers - with valuable knowledge of focused applications and experience working in specific markets - to emerge.
Design, construction, and management: Options for private 5G nets are emerging. For example, Betacom, a longtime design and deployment partner to AT&T, T-Mobile, and Verizon, launched a fully managed private 5G network service in late May. The company handles network design, deployment, and management for the offering. Betacom explains enterprise customers retain ownership of the network and local control of their data. Management is provided by its NOC, which is hosted on Microsoft Azure.
Exploring the wireless WAN: For enterprises looking for managed, high-speed connectivity without owning all the components, Boise, Idaho-based Cradlepoint offers a wireless WAN managed service that can include edge networking gear such as 5G routers. Cradlepoint was acquired by Ericsson in September 2020.
Time and timing
Given the early state of the private 5G network ecosystem, IT managers and their businesses aren't hard-pressed to make a quick decision. This can be a positive since many of the players in the ecosystem haven’t established a track record in the space per se.
Those operators, vendors, integrators, and service providers seeking entry into the ecosystem have made news with partnerships, initial services, and tests with household names in the U.S. and abroad; actual commercial deployments are largely in the early stages.
This leaves plenty of time for enterprises to evaluate all aspects of the private 5G network opportunity. The menu of options is under construction.
Rising Upload Speeds Drive Symmetric Internet Access Rollout
- Published on 23 August 2021

(Source: Pixabay)
With two large spending efforts focused on spreading broadband far and wide in the U.S., operator demand for symmetrical Internet access portends to feed American businesses a high-fiber diet. Though Internet access has long been dominated by asymmetrical services, with their far higher download than upload speeds, today’s work-from-home workforce is driving the need for symmetrical services. And fiber’s ability to deliver matching speeds has caught the attention of businesses, operators, and vendors worldwide.
Two drivers
Why? Two reasons. First, not unlike the country’s transportation infrastructure, water systems, and schools, much of the nation's networking infrastructure could benefit from modernization and expansion.
Researchers see record spending on 10G Passive Optical Network (PON) infrastructure by operators of almost all sizes. That signals operator interest in deploying and extending these fiber-based systems.
Secondly, the spread of remote work, school at home, telemedicine, and video surveillance apps requires higher upload speeds on Internet access links. Covid showed us how much we could do from home, so enterprises and service providers alike are looking to plan their infrastructure needs now with this combined growth in mind.
Future-proof rural broadband?
And while fixed wireless, satellite, coaxial cable, and fiber can all deliver broadband to businesses large and small in urban areas or underserved rural regions, carrier spending shows that fiber (the most expensive and demanding to deploy media) is a favorite for its ability to support symmetrical Internet access now and in the future.
Though massive FCC and federal broadband deployment spending efforts don’t specify a medium for operators, the need for links with matching download and upload speeds is growing.
Supporting higher upload speeds during Covid
According to the Q2 2020 OpenVault Broadband Insight (OVBI) report, some of the upstream changes were already being seen then. The firm provides data analytics and tracking services to operators. And Comcast released network usage stats in early spring, claiming upstream traffic on its network jumped 56% during 2020.
“Pandemic lockdowns changed the nature of upstream usage – in all likelihood, forever,” the more recent OpenVault report noted. “Continued high levels of remote work and a new embrace of videoconferencing for communication needs mean that consumption will pressure the limited upstream capacity of many broadband infrastructures. Moreover, the unique role of the upstream as an enabler of two-way communication makes unfettered performance essential.”
Another issue worthy of consideration for IT managers is that uplink data is growing at a higher rate than downlink data, according to AT&T SVP of Wireless and Access Technology Igal Elbaz. And conferencing (and collaboration) apps, including Zoom, require faster uplink speeds.
Those now driving the need for symmetrical Internet include:
- Students
- Businesses
- Telecommuters
- Content creators
- Gamers
As a result of the usage from these groups, even those who aren’t fiber-first operators see the building need for symmetrical Internet access.
Billions for broadband
An injection of billions in funding for the deployment of high-speed networks beyond their current reach gives hope that rural broadband will not forever remain an oxymoron, or a bridge too far for IT managers. The rural broadband issue has gotten renewed attention as IT managers support a home-based workforce.
Launched last year, the Rural Digital Opportunity Fund (RDOF) is the FCC’s latest step to fund service provider deployment of broadband networks in rural America. The agency is directing $20.4 billion, over ten years, to finance up to gigabit speed services in unserved rural areas, connecting millions of American homes and businesses to digital infrastructure.
But for all the plans to extend high-speed broadband to the unserved and underserved, it’s difficult to predict when infrastructure that supports symmetrical Internet access will have a solid rollout timeline.
Progress vs. Congress
It appears that few are laughing at the joke that Congress is the opposite of progress when it comes to government-funded broadband rollout. The President's American Jobs Plan, which contained $100 billion for the massive undertaking, has been slowed, and the current infrastructure plan’s broadband spending has been cut to $65 billion thanks to near gridlock in Washington, D.C.
The President believed the U.S. could bring affordable, reliable, high-speed broadband to every American through a historic investment of $100 billion. It will be interesting to see, if the reworked bill passes, what can be done with the lesser amount.
The President’s original plan prioritized building “future proof” broadband infrastructure in unserved and underserved areas so that we finally reach 100 percent high-speed broadband coverage.
Media favoritism?
Further, an industry association is upset with the FCC’s RDOF plan, claiming the agency favors fiber over other broadband media.
The Wireless Internet Service Provider Association (WISPA) laid out its concerns to the FCC in late June about the agency's RDOF awards. The association, which includes Fixed Wireless Access (FWA) providers, believes the FCC prefers fiber deployments and that the emphasis should be on reaching unserved regions ASAP as opposed to focusing on symmetrical broadband speeds.
Rethinking Broadband Access Options as Operators Drive 10G PON Deployments
- Published on 23 August 2021

(Source: Pixabay)
IT managers stand to gain faster, symmetric services as fiber network spending climbs.
A steep spike in carrier spending on optical broadband access equipment combined with federal incentives to deliver high-speed service to rural areas is changing the broadband access game and options for IT managers.
This is increasingly being accomplished by the deployment of 10 Gigabit Passive Optical Networks (10G PON), point-to-multipoint access systems that use fiber optic media to flexibly provide advanced services.
Why? Operators had first provided T-1 and Ethernet services for IT managers at SMBs seeking more robust corporate networks, but cablecos won many firms back by providing cheaper and faster DOCSIS 3.0- and 3.1-based services. Now, the spurned telcos have an advantage with the deployment of, and upgrades to, 10G PON networks.
Unlike current DOCSIS services, 10G PON-based services are symmetric (the same upload speed as download), something cablecos cannot yet match. Symmetric speeds are essential for applications such as surveillance/security/remote monitoring that stream video.
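The practical gap between asymmetric and symmetric links is easy to quantify. A back-of-the-envelope sketch (the service tiers and file size below are illustrative assumptions, not quoted offerings):

```python
# Rough comparison of upload times on an asymmetric cable link vs. a
# symmetric fiber link. Speeds and file size are illustrative only.

def upload_seconds(file_gigabytes: float, upload_mbps: float) -> float:
    """Time to push a file upstream, ignoring protocol overhead."""
    megabits = file_gigabytes * 8 * 1000  # GB -> megabits (decimal units)
    return megabits / upload_mbps

surveillance_clip_gb = 2.0  # e.g., a batch of compressed camera footage

# Hypothetical tiers: 1 Gbps down / 35 Mbps up cable vs. 1 Gbps symmetric fiber
cable_secs = upload_seconds(surveillance_clip_gb, 35)
fiber_secs = upload_seconds(surveillance_clip_gb, 1000)

print(f"Asymmetric cable upload: {cable_secs / 60:.1f} minutes")
print(f"Symmetric fiber upload:  {fiber_secs:.0f} seconds")
```

Under these assumed tiers, the same footage that takes several minutes to push upstream over cable clears in seconds over symmetric fiber, which is the whole argument for video-heavy upstream applications.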
Targeted businesses
“In addition to helping them better compete with cable in the residential broadband market, telcos’ 10G-PON fiber deployments are also aimed at reclaiming share in the small-medium business (SMB) market, where cable operators have also made substantial gains over the last decade,” explained Jeff Heynen, Vice President, Broadband Access and Home Networking at Dell’Oro Group, which conducts market research spanning telecom, networks, and data center IT industries. “XGS-PON allows them to offer symmetric, multi-gigabit services that cable operators will have a difficult time competing with until they expand fiber networks of their own.” Telcos are also expected to put the full-court press on small enterprises.
Review time for IT managers
This emerging shift in dynamics requires enterprise IT managers to explore the capabilities and potential benefits of 10G PON services. It also makes sense to ask carriers for their timelines for upgrading existing plant and deploying the new 10G PON infrastructure.
Much of the push to extend broadband to underserved and unserved expanses of America is powered by the FCC's Rural Digital Opportunity Fund (RDOF) program, which is providing carriers funding to reach these areas with high-speed services. Proposed federal and state broadband spending programs are fueling the broadband breakout.
IT managers should expect an increase in 10G PON deployments in the U.S., even before RDOF projects get rolling. While these deployments are primarily for residential applications, many operators see 10G PON as their connection to small and midsize businesses (SMBs) and smaller enterprises.
Spending on 10G PON equipment skyrocketed 500% in the first quarter, according to a newly published report by Dell'Oro Group. Total global revenue for the Broadband Access equipment market increased to $3.3 B in the first quarter of 2021, up 18 percent year-over-year.
Upgrading to 10G PON
- Making the connection. IT managers looking to take advantage of 10G PON services will need a standalone optical network terminal (ONT), a piece of termination equipment. Another increasingly popular option is to insert a pluggable module in an existing switch at the user site.
- Service availability. IT managers will need to make certain the business services they have been using at a site are offered over the 10G PON infrastructure. The same goes for the service-level agreements (SLAs) they had with those services. They should check that the agreements cover everything from the ONT on the user premises to the optical line terminal (OLT) in the telco network.
- Protection. IT managers must ascertain how traffic will be protected and secured, as 10G PON is a shared technology. IT managers are used to having strong SLAs; that should not change. But it does require some knowledge and investigation by the IT manager to make sure that is the case.
Starting from scratch – broadband options for rural areas
For IT managers in areas lacking broadband access, even with the RDOF and other broadband-expansion initiatives, it is hard to be sure how they will get high-speed services. The following are the primary options. Time and money are the deciding factors, as carriers must balance the cost of providing broadband access against the time it takes to reach anxious customers.
- Fixed wireless. A carrier favorite in cases where an antenna and/or tower is less expensive and quicker than trenching and laying fiber, especially in un- and under-served rural areas.
- Satellite. With the re-emergence of low-earth orbit (LEO) satellites from Starlink (Elon Musk's SpaceX), Amazon, Telesat, and OneWeb, high-speed data can be beamed to locations equipped with a package of small on-premises gear.
- Fiber. A known option that is the priciest of the three (and the most time-consuming in new installations). Telcos need the speeds fiber makes possible to get ahead of DOCSIS 3.0 cable service alternatives and to be ready when cablecos step up speeds with DOCSIS 4.0 services.
Fiber for the win?
Despite the competitive options, Heynen believes fiber has the inside track when it comes to rural areas that will be addressed as part of the FCC’s massive RDOF broadband breakthrough program.
"I do think the majority of RDOF money is going to be put towards the expansion of fiber projects because of the demand to provide gigabit speeds,'' Heynen began. "However, quite a number of major projects are going to be Fixed Wireless Access (FWA)-based, largely because they address areas where fiber deployments will simply be too costly."
Operators need to balance the speeds promised with the time and cost it will take to roll out full fiber networks. IT managers should also expect carriers that won funds from the RDOF to use a combination of both fiber and FWA.
IT Chimes In: How Effective is Facial Recognition?
- Published on 23 August 2021

(Source: Pixabay)
As results, performance factors, and challenges come to light, interest spreads for IT managers.
The Olympics in Japan will use facial recognition this summer. And Disney World announced in May it will test the technology for a month to cut wait times at the theme park as part of a more touchless experience.
But before getting serious about cloud-based or on-premises facial recognition systems, IT managers must first determine whether they will meet the desired performance levels required by the enterprise in its application.
Often lost in the heated debate over whether municipalities and law enforcement entities should be allowed to use facial recognition to identify suspects in crimes is a secondary question: how effective are the AI-driven, software-based systems?
And before we can dig into early returns on performance, we must acknowledge that many more verticals could apply facial recognition, beyond enterprises with access control implementations, law enforcement organizations, and large multipurpose sports venues looking beyond identifying staff.
Vertical value
- Retail. Identify shoplifters to reduce shrinkage.
- Transportation. Airport security. Reduce waiting times. Check-in kiosks.
- U.S. government. Deploy for border control uses.
- Entertainment. Security for mass events at large venues such as the Olympics, etc.
- Banking. Replace passwords with screen-based face scanning for online banking.
- Healthcare. Deploy touchless access to secure lab gear, clean rooms, and more.
- Media/marketing. Test audience reaction to movie trailers, characters in TV pilots, and optimal placement of TV promotions.
- Consumer products. Measure visual emotions through facial analysis to pick candidates for new or enhanced offerings.
- Travel. The US Department of Homeland Security predicts that facial recognition will be used on 97% of travelers by 2023.
Performance, historically
How far have facial recognition systems come? The Technology Policy Program at the Center for Strategic and International Studies (CSIS), which produces non-partisan and non-proprietary research, took a look.
Facial recognition has improved dramatically in only a few years, according to CSIS blog author William Crumpler in April 2020. “As of April 2020, the best face identification algorithm has an error rate of just 0.08% compared to 4.1% for the leading algorithm in 2014, according to tests by the National Institute of Standards and Technology (NIST). As of 2018, NIST found that more than 30 algorithms had achieved accuracies surpassing the best performance achieved in 2014.”
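Those percentages are easier to grasp at scale. A hedged illustration (the error rates are from the NIST figures above; the search volume is a hypothetical example, and real error behavior is not uniform per search):

```python
# What the quoted error rates mean in practice: expected misses across
# a batch of identification searches. The 100,000-search volume is a
# hypothetical illustration, not a figure from NIST.

def expected_errors(searches: int, error_rate: float) -> float:
    """Expected number of failed identifications, assuming a uniform rate."""
    return searches * error_rate

searches = 100_000
err_2014 = expected_errors(searches, 0.041)   # 4.1%: leading 2014 algorithm
err_2020 = expected_errors(searches, 0.0008)  # 0.08%: best 2020 algorithm

print(f"2014-era algorithm: ~{err_2014:.0f} errors per {searches:,} searches")
print(f"2020-era algorithm: ~{err_2020:.0f} errors per {searches:,} searches")
```

Under that simplifying assumption, the improvement is roughly 4,100 misses per 100,000 searches down to about 80, which is why the 2014-to-2020 gap matters operationally, not just on a benchmark.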
But beyond algorithms, how are the systems faring in real-life deployments over the last few years? The answer, of course, depends on how their results are used, not just how they are measured.
Beyond the AI engine
A case in point is the New York Police Department (NYPD), which began a massive storage upgrade in 2009 to support the deployment of its facial recognition systems in parts of the Big Apple, such as the financial district. It is important to note that the results the department released last year reflect matches used as one piece of a larger process, which is how the NYPD assesses the technology's effectiveness.
The NYPD has long stated that any “matches” are used as only one part of an investigation. They are just leads, and not enough on their own to issue arrest warrants. Before it becomes a useful lead for criminal investigators, a match from facial recognition is analyzed by a group that reviews each one and weighs in on the quality, and hence the viability, of the match.
Further review
“If possible, matches are identified, then trained Facial Identification Section investigators conduct a visual analysis to assess the reliability of a match and conduct a background check to compare available information about the possible match and relevant details of the investigation,” according to the NYPD FAQ.
Specifically, the NYPD uses facial recognition to compare images obtained during criminal investigations with lawfully possessed arrest photos. When used in combination with human analysis and additional investigation, facial recognition technology is a valuable tool in solving crimes and increasing public safety.
The NYPD measures the effectiveness of its facial recognition software by how well it has provided substantial leads in criminal cases. Last year, the NYPD released results for the 2019 calendar year, explaining that its Facial Identification Section “received 9,850 requests for comparison and identified 2,510 possible matches, including possible matches in 68 murders, 66 rapes, 277 felony assaults, 386 robberies, and 525 grand larcenies.” “The NYPD knows of no case in New York City in which a person was falsely arrested on the basis of a facial recognition match,” it added.
What affects facial recognition performance
Those in the know cite three variables that can impact the accuracy of a facial recognition system’s ability to make matches. They are the person’s pose, illumination, and expression (PIE) in the photo.
The mask factor
And perhaps not much of a concern before the global pandemic was the use of face masks. It is unclear what the long-term prospects for facemask use after COVID-19 are, in the U.S. and elsewhere. But masks have left their mark.
Consider this research and testing from NIST. The entity tested 89 commercial facial recognition algorithms using digitally applied masks and claimed to have found a 5-50% error rate in matching masked faces to photos of the same person. Vendors are hard at work improving that error rate. The BBC reported that a US Department of Homeland Security "controlled-scenario test" in January found one facial recognition system with a 96% success rate, although the results of those tested "varied greatly."
Environment dependent
And while it is a safe bet that we will see less mask-wearing, save for the winter months, some facial recognition systems will still have to deal with what are known as facial occlusions. These include such longtime items as hats, sunglasses, scarves, and heavy facial hair. Occlusions are known to weaken the performance of facial recognition systems, especially in applications that use video surveillance.
IT managers with controlled access scenarios in mind for their systems may not need to be concerned with this variable in facial recognition accuracy.
The road ahead
Facial recognition systems are attracting broader interest in the form of tests and actual deployments by users including Disney World and the 2021 Olympics. But deployments that include municipal law enforcement entities face headwinds from those with privacy and Big Brother concerns.
Taking a Closer Look at Facial Recognition
- Published on 07 June 2021

(Source: Pixabay)
Enterprise, law enforcement, and sports venue use expands, raising the need for IT infrastructure evaluation to support the resource-demanding systems.
Whether it is access control for employees, clearing fans to fill a stadium for mass events, or helping identify suspected criminals, users are forging ahead with the evaluation, testing, and implementation of facial recognition systems.
And while law enforcement entities have encountered headwinds in the form of privacy and Big Brother concerns, interested IT managers across industries all need to consider infrastructure requirements for facial recognition systems.
Your biometric options
For many security-conscious IT managers, key cards are giving way to the biometric family of approaches, which use speech, fingerprints, retina scans, and even breath to provide or prevent access to facilities and systems.
Another biometric approach, facial recognition, found its way from science fiction and spy movie material at the turn of the century to actual use by law enforcement, the military, and sports venues.
Early uses
Largely unbeknownst to the masses, facial recognition was tested/used at Super Bowl 35 in Tampa in 2001, just months before the 9/11 terrorist attacks on New York City and the Pentagon.
Since then, the New York Police Department has pushed to implement facial recognition but has run head-on into opposition from privacy advocates, the American Civil Liberties Union, and those fearing Big Brother uses of the advanced technology.
Seeing its value for limited access control applications, the NFL’s New Orleans Saints recently disclosed that it implemented a facial recognition software system from AnyVision in December 2019. The team explained the system has been used to give players, staff, and other workers secure, seamless access to practice facilities. Not long after, COVID-19 hit the U.S.
How matchmaking works
Facial recognition is used to identify or confirm a person's identity using their face. It can be employed to identify individuals in pictures, videos, or in real-time.
The systems use algorithms to spot distinctive details of a person's face, such as the location of their eyes, the shape of their nose, or the appearance of their chin. By combining these attributes, they can read the geometry of a face to match it to an image. Most facial recognition systems use 2D images rather than 3D because it is easier to match a 2D image with public photos or those in a database.
The system then converts the facial image to data. How does that work? The best explanation of this step comes to us from security software vendor Norton: “The face capture process transforms analog information (a face) into a set of digital information (data) based on the person's facial features. Your face's analysis is essentially turned into a mathematical formula. The numerical code is called a faceprint. In the same way that thumbprints are unique, each person has their own faceprint.”
This faceprint is then run through a database of faces seeking a match in what can be a huge repository of many millions of images, with the largest found in law enforcement and crime-fighting organizations.
If your image is matched with one in a facial recognition database, the system notifies authorities who decide if it is a close enough match to act on.
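At its core, the matching step described above is a nearest-neighbor search over numeric faceprints. A minimal sketch of the idea (the three-element vectors, names, and similarity threshold are toy values; production systems use learned embeddings with hundreds of dimensions):

```python
import math

# Toy faceprint matcher: each "faceprint" is a numeric vector, and a
# match is the database entry with the highest cosine similarity,
# provided it clears a confidence threshold. All values are illustrative.

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def best_match(probe, database, threshold=0.95):
    """Return (name, score) of the closest faceprint, or None if no
    database entry clears the threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, fp)) for n, fp in database.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= threshold else None

database = {
    "person_a": [0.11, 0.80, 0.42],
    "person_b": [0.75, 0.10, 0.33],
}

probe = [0.12, 0.79, 0.40]  # a new capture, close to person_a's faceprint
print(best_match(probe, database))
```

The threshold is the knob the NYPD-style human review sits behind: a system can surface only matches above it, and investigators then judge whether the lead is viable.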
Infrastructure issues
The infrastructure required to support an effective facial recognition operation depends on how the technology is being used and the volume of images used to attempt matches. For example, if an enterprise is using facial recognition in an access control application to confirm the identity of employees, entities may be able to upgrade their current corporate IT infrastructure.
However, if a large, distributed enterprise is considering an implementation for all employees and workers, network and storage systems might need to be dedicated or augmented, along with computing power in the data center.
In either instance, high-speed network connections are a critical piece of the facial recognition system to ensure the low latency needed for high-volume scanning and matching in time-sensitive applications, such as access control.
Network needs
In a metropolitan law enforcement implementation, a network of surveillance cameras serves as the foundation for the facial recognition software, which pulls images stored in a large database with massive storage.
By contrast, a solution for an NFL venue could run on the arena's backbone network, with storage upgrades in the facility's data center. In newer, multi-purpose NFL stadiums, such as Atlanta's Mercedes-Benz Stadium, the IT infrastructure is anchored by a data center that processes all data on game day and backs it up to the vendor’s cloud the same day.
Originally used by telecom carriers to deliver triple-play service bundles to the home, a Gigabit Passive Optical Network (GPON) is used to connect a wide array of high-speed devices and systems to the data center. The Mercedes-Benz GPON uses 4,000 miles of fiber-optic cable. GPONs can handle the low latency needs of high-volume scanning and matching.
When it hosted Super Bowl 53 in February 2019, the facility’s backbone network supported:
- 15,000 Ethernet ports
- Wi-Fi equipment
- 700 POS devices
- A security system
- Keycard accessed doors
Though the NFL first used facial recognition as early as 2001, individual teams have since evaluated the tech. Some are operating small image-volume applications to verify the identity of employees and players entering stadium facilities.
From staff access to fan security
IT infrastructure revamps could be in the plans should sports teams expand their facial recognition implementations beyond players, staff, and suppliers to include the gameday fan crush of 65,000 to 90,000 people, all rushing to pass through often-jammed stadium entry points.
The same goes for older stadium venues that have added infrastructure over the years to support Wi-Fi 6, 5G service, and advanced video replay systems for fans and TV broadcasters.
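The scale of that fan-entry problem is easy to sketch with arithmetic. The gate counts and per-scan times below are hypothetical illustrations, not vendor-quoted figures:

```python
# Back-of-the-envelope gate throughput: how long to admit a full
# stadium if every fan passes a facial recognition scan at entry?
# Gate counts and scan times are hypothetical.

def minutes_to_admit(fans: int, gates: int, seconds_per_scan: float) -> float:
    """Total admission time assuming gates work in parallel at a steady rate."""
    return fans * seconds_per_scan / gates / 60

fans = 70_000
print(f"{minutes_to_admit(fans, 40, 3.0):.0f} min with 40 gates at 3 s/scan")
print(f"{minutes_to_admit(fans, 80, 1.5):.0f} min with 80 gates at 1.5 s/scan")
```

Under these assumptions, halving the scan time and doubling the gates cuts admission from roughly an hour and a half to around 20 minutes, which is why per-scan latency, and the network and storage behind it, becomes the design constraint.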
Sports Industry Execs Share Challenges and Approaches to Creating Analytics
- Published on 07 June 2021

(Source: Pixabay)
Tracing their roots to "Moneyball" 20 years ago, sports businesses believe their data-driven decision efforts can help enterprise IT.
When it comes to using tech to optimize core business processes, business-to-business (B2B) and business-to-consumer (B2C) entities are often more similar than dissimilar. And when it comes to analytics, sports leagues, teams, and federations think enterprise IT can learn from their experiences.
Think about it. Traditional enterprises, and sports entities, exist in competitive industries. Both invest heavily in data collection from their assets (business systems and workers vs. players and stadiums). And each grapples with implementing new processes, expanding remote access, and addressing security challenges, to name but a few.
Both types of businesses are focusing on enabling data-driven decisions and analytics designed to help them answer strategic questions before the application of AI and Machine Learning.
Sports entities have been implementing analytics since the turn of the century, as illustrated in the movie Moneyball. What can enterprises learn from sports analytics?
Ask the expert
Who better to ask than Christina Chase, Co-founder and Managing Director of the Massachusetts Institute of Technology’s Sports Lab.
The lab works with pro-sports teams, leagues, federations, and global brands to accelerate new technologies and approaches in areas such as player identification and development; in-game strategy; athlete health and performance; next-gen fan engagement and OTT; smart stadiums/venues; high-performance equipment, sensing, and fabrication; and e-sports, explained Chase. “And they’re having the same challenges you might be having.”
The lab’s clients include FIFA, Adidas, Red Bull, Major League Baseball, winter sports equipment maker Shred, Google Cloud, The Milwaukee Brewers, and the United States Olympic Team.
(Christina Chase will be keynoting Data Center World August 16-19, 2021. Use code NWC to save $200 on registration.)
Tech first and foremost?
Maintaining that technology alone cannot address firms' need to analyze the fast-growing mountains of data collected from countless endpoints in business and in sports, Chase suggests organizations work through several steps before talking tech:
- Figure out what you want to answer, evaluate your organization’s capabilities/capacity, then look at solutions.
- Give yourself adequate time to vet potential solutions, including multiple contained pilots before selection.
- Understand how this new data will integrate into existing data streams and processes and how it will be used.
Chase emphasized that without working through these steps, users will find that more data does not mean better insights; technology is a tool, not the answer, and better technology does not guarantee a better outcome.
A case in point: FIFA’s VAR
Soccer governing body FIFA embraced the MIT principles in its efforts to determine how to help the on-field referee eliminate clear errors in officiating matches. The data-driven decision was to create a Video Assistant Referee (VAR), an additional referee located off the field. The VAR reviews game calls, penalties, goals, and more, with access to video on a tablet device and direct voice communications to the chief referee on the field.
FIFA included stakeholders - the referees - in the process early on, explaining that the VAR would cut errors in calling the match by adding video review. It was not designed to replace their game calling.
FIFA discussed and tested different methodologies with referees. The input from the VAR was not intended to be the final outcome for the on-field chief referee but rather to help eliminate clear errors in calling the match.
The next steps included determining how to collect and move video around the venue and how to combine it with other data streams, in part so that referees could deliver a review without disrupting the live television broadcasts of FIFA matches – or the coverage on screens for fans in the stands.
Although an earpiece was considered for communication, tapping the ear was not deemed an effective way to let players, coaches, and fans at home and in the stands know that a play was being reviewed. Instead, referees began drawing an imaginary box in the air, so all parties knew of the review.
The VAR, which was used at the 2018 FIFA World Cup (www.FIFA.com/VAR), was a big change beyond eliminating clear mistakes. Instead of players running up to the referee who had blown the whistle to debate the call, they knew from the box signal that the call was being reviewed by the VAR.
What do sports operations collect?
Sports enterprises currently collect rivers of data from athletes on the field, from RFID chips in shoulder pads in the NFL to data from sensors in race cars and the suits of their drivers. These entities collect this data to understand and optimize the performance of their most precious assets, not unlike corporations.
Data is also collected from these athletes off the field and track in hopes of better understanding and optimizing the training, conditioning, and health status of these same human assets. This has led to the development of wearables that monitor and deliver specific health information to trainers and equipment makers.
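To make the idea concrete, here is a minimal sketch of how raw wearable-sensor readings might be rolled up into per-player performance summaries. The field names and metrics are illustrative assumptions, not any league's actual telemetry schema:

```python
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

# Hypothetical reading from a player-worn sensor; field names are
# illustrative, not a real league or vendor telemetry format.
@dataclass
class SensorReading:
    player_id: str
    speed_mps: float       # instantaneous speed, meters per second
    heart_rate_bpm: int    # heart rate, beats per minute

def summarize(readings):
    """Roll raw readings up into per-player averages and peaks."""
    by_player = defaultdict(list)
    for r in readings:
        by_player[r.player_id].append(r)
    return {
        pid: {
            "avg_speed_mps": round(mean(r.speed_mps for r in rs), 2),
            "max_speed_mps": max(r.speed_mps for r in rs),
            "avg_hr_bpm": round(mean(r.heart_rate_bpm for r in rs)),
        }
        for pid, rs in by_player.items()
    }

readings = [
    SensorReading("P7", 6.1, 158),
    SensorReading("P7", 8.4, 171),
    SensorReading("P12", 5.2, 149),
]
print(summarize(readings))
```

A trainer-facing system would layer thresholds and trend analysis on top of summaries like these, but the core pattern — group by athlete, then aggregate — is the same one enterprises use for business-system metrics.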
What about remote workers?
The remote worker surge is turning into a tidal wave thanks to COVID-19, during which the lion’s share of businesses were required to have employees work from home. Remote access is nothing new to businesses, but the scale at which it was done is still unprecedented. This required a rethinking of the way employees work (and how education is delivered) for most businesses.
Offices went dark while Zoom, Microsoft Teams, and other collaborative tools filled the gap.
The global operation challenge
The goals achieved and solutions deployed by international sports organizations can serve as a model for global enterprises, providing the consistency required across countless far-flung locations and remote and mobile workers. The data-collection standardization achieved with FIFA's VAR, for example, simplified rollouts across venues.
In sports, racing teams were already supporting remote workers in fixed locations as well as entire race operations that travel to international races. This included everyone needed to field and run a racing team. That includes the pit crew, technicians, support staff, the vehicles, the supplies, and, of course, the drivers.
Avoiding data loss
Securing endpoints is a seemingly never-ending battle. Supporting fixed remote and mobile assets is one tall task, but securing all these diverse and far-flung network entry points is of paramount importance in the sports business and for traditional enterprises as well. Organizations in both groups must do so to protect data transmission and collection and to prevent the loss of their prime asset, intellectual property (IP), in fiercely competitive industries.
Case in point: Going global with Williams F1 Racing
Facing these challenges, Williams F1 Racing chose to partner with DTEX Systems, Inc., which secures the virtual workplace through its endpoint data loss prevention (DLP) systems. The vendor’s offering has enabled the racing company to meet the demands of the operation before and during the COVID-19 pandemic, explained Al Peasland, Head of Technical and Innovation Partnerships at Williams F1 Racing.
“We travel to races in over 20 countries a year in what is a fast-paced business,” explained Peasland. The company has roughly 700 employees at its U.K. headquarters, as well as remote fixed workers and remote mobile workers, in addition to its country-hopping road race crew. At a race, everyone from the pit crew to the driver and the group that builds the garages for the cars needs access to data, and since there’s no such thing as a 9-5 day, some are coming in from a hotel network after hours.
Once at the track, the team's needs become more complicated as the staff collect data from sensors throughout the race cars and from their drivers, according to Peasland. Further still, sensors in drivers’ suits that collect data up until the race are removed before races to reduce the weight of the car and improve its performance.
The racing company uses a system from DTEX for workforce analytics and security to identify risky activities and employees. Enterprises in other competitive industries, such as pharmaceuticals and manufacturing, also want to track data to ensure employees don’t take IP or trade secrets from one employer to another.
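As a toy illustration of the kind of rule such workforce-analytics tools might start from — and emphatically not DTEX's actual product logic — an endpoint event could be flagged when off-hours access coincides with an unusually large outbound transfer. The thresholds below are invented for the example:

```python
from datetime import datetime

# Assumed office hours (local time) and per-event transfer threshold;
# both values are illustrative, not from any real DLP product.
WORK_START, WORK_END = 8, 18
TRANSFER_LIMIT_MB = 500

def is_risky(event):
    """Flag an endpoint event that is both off-hours and a large transfer.

    event: dict with 'timestamp' (datetime) and 'outbound_mb' (float).
    """
    off_hours = not (WORK_START <= event["timestamp"].hour < WORK_END)
    large_transfer = event["outbound_mb"] > TRANSFER_LIMIT_MB
    return off_hours and large_transfer

evt = {"timestamp": datetime(2021, 6, 7, 23, 30), "outbound_mb": 1200.0}
print(is_risky(evt))  # prints True: off-hours access plus a large transfer
```

Real DLP systems combine many such signals with behavioral baselines per user, but the single-rule version shows why context (who, when, how much) matters more than any one data point.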
The road ahead
Before collecting data on the road to analytics, enterprise IT managers may want to check in with their brethren in the fast-evolving sports industry, given their similarities in goals.