Articles

Sample Articles from Bob Wallace.



The National Football League’s long history of experiences, good and bad, with emerging technology products and services offers pure B2B players lessons that can save them time and effort.

Their B2B counterparts have long regarded sports organizations as business-to-consumer outfits, but enterprise IT would be well advised to keep a close eye on sports entertainment entities, which, it turns out, play in both spaces simultaneously. Why sports? Because they have years of hands-on experience with emerging technology products and services that most others can benefit from.

Whether it's Wi-Fi 6, 5G, analytics, facial recognition, or video capture and distribution, the National Football League (NFL) has been there and is doing that, on a league-wide or team venue basis, accumulating expertise in areas of interest to B2B firms.

Sports leagues are B2B and B2C entities

Like other sports leagues around the world, the NFL’s B2B product is live sports content (programming), with contests carried live by the broadcast TV networks (that originally took the sport to the masses) as well as Internet streaming services in the U.S. and abroad.

Yes, the NFL, like other sports leagues, also doubles as a B2C operation. A key slice of its revenue began as ticket sales for fans in the stands and has evolved into monetizing followers through stadium-wide Wi-Fi that keeps them engaged with social media and video sharing sites. Secondary revenue streams are concession and merchandise sales.

It was little surprise when the NFL branched out into console gaming with its wildly popular Madden partnership with EA Sports, which in turn has given rise to e-sports. Several team owners across U.S. pro sports have taken stakes in e-sports teams.

Another means to grow the leagues is through sports betting, which is widely available and helps sports attract additional viewers.

Technology Experiences

Facial recognition: security vs. privacy

The NFL was an early tester and user of facial recognition technology: Super Bowl 35 in Tampa was among the first events in the U.S. to try it, in 2001, less than a year before the world-changing terrorist attacks of 9/11. The league’s New Orleans Saints were among the first organizations (in 2020) to use facial recognition systems to control access to the club’s practice facility, making the Saints one of the first in the U.S. to employ the technology for a non-law-enforcement application.

The Cleveland Browns announced early in the season that they are using a facial recognition option as an alternative to paper and mobile tickets for fans. “Fans can enroll and link their ticketing account to a selfie and come to the stadium and enter in with just their face,” explained Brandon Covert, VP of Information and Technology for the Cleveland Browns, in an interview with WKY-TV in Cleveland. Covert says this will make for a faster experience. “Fans don’t have to use their phone, don’t have to add tickets to their wallet, which is already a good process. This makes it just even quicker.”

Other venue owners are exploring the controversial tech as a means of ensuring that fans who have been banned from their stadiums for bad behavior don’t gain access to the facilities.

Wi-Fi: Then and now

Since pioneering stadium Wi-Fi with the league's inaugural implementation in 2012, the New England Patriots have upgraded the connectivity foundation multiple times, most recently working with partner Extreme Networks on an installation of Wi-Fi 6 (aka 802.11ax) to stay ahead of soaring use at the 65,000-plus-seat multi-purpose stadium.

The emergence of stadium Wi-Fi has been a hit with smartphone-toting fans, content-creating teams, the league, and an array of team and stadium apps that engage fans and enable functions and processes. Fans have responded by using the stadium Wi-Fi networks and cellular services to share content on social media sites and to send images and videos to friends.

Analytics

Like a huge swath of enterprises, the NFL has used a variety of technologies, including Wi-Fi analytics packages, to gain vivid insights into fan activity and to help monetize fans. Offerings like Extreme's have provided teams and Super Bowl hosts granular data to identify usage trends and plan their underlying networks accordingly.

Video networking and distribution

Given that the NFL's B2B product is live sports programming with frills, the other Bs are the TV broadcast networks and, more recently, streaming services. The league has used technology to improve and expand both live and on-demand video content for many years. It created two coveted premium channels of its own: NFL Network, which carries Thursday Night Football, and NFL RedZone, a live offering showing scoring clips from all games played on Sundays.

Many NFL stadiums have already been equipped with networked HD video cameras to provide 360-degree replays for fans in the stands, NFL websites, and highlights shows. These ring-based systems from Intel can require a mini data center connected to a GPON network to carry and process the huge amount of video used to create next-gen replays. Fiber and football go together.

Going global, through streaming

Enhancing the programming product and expanding its use are core to the NFL's B2B business strategy. That's why the league has formed streaming deals in Canada, China, Europe, and beyond in its efforts to take its live games (with ads and sponsorships) global. That effort has been aided by holding two regular-season games a year in the U.K.

Works in progress

Some of the NFL’s tech initiatives have been met with headwinds and delays, most notably the league’s mobile ticketing mandate this year. The mandate eliminates paper tickets and has angered fans with long waits for entry when the stadium Wi-Fi signal isn’t strong or reliable enough to scan the tickets in a cellphone’s Google Wallet.

Changing or replacing long-standing processes, whether for business or consumers (fans), is yet another area where the NFL and other pro football leagues have much in common with B2B enterprises. And enhancing current content products needs to work for everyone as the league serves two interconnected customer bases: businesses and consumers.


Cutting Through the Fog Surrounding Private 5G Networks

About a year ago, Deere & Co. grabbed headlines by entering the Citizens Broadband Radio Service (CBRS) auction and buying spectrum licenses. It plans to use the spectrum in private 5G networks at manufacturing facilities and to enable intelligent agriculture, both in rural areas that are not tops on operators’ rollout lists.

Since the auction, it seems, everyone in the fast-evolving wireless networking ecosystem, except Deere, has talked endlessly about their private 5G network plans: wireless operators, network integrators, equipment vendors, and spectrum license owners. And all will team with partners to deliver the next-gen networks.

Defining “private 5G network”

The pressing question is how enterprises define “private” 5G networks, this at a time when providers express interest in staying involved after the network is built by selling management, monitoring, security, multivendor systems integration, and oversight services to enterprises. These options are in sharp contrast to the time-tested DIY private net approach that Deere seems driven by.

Operators clearly want in on the private 5G network opportunity by offering managed network options, complete with network operations centers (NOCs) from which to provide the above-mentioned services, as well as SLAs.

Enterprise-handled?

Do we assume that enterprises handle configuration and ongoing configuration management, performance management, and setting service levels? These foundational tasks must be high on the decision list for those seeking private networks.

Once enterprises determine that the ROI of a private 5G network beats that of using an operator’s public 5G service, a calculation that comes down to application and geographic coverage issues, options abound.

Enterprise IT needs to decide exactly how much, if any, of a private 5G network it wants provided and managed. It must also decide where it stands on top issues and which assets it currently possesses.

The checklist

Specs and standards: In an emerging ecosystem, it's always wise to consult with relevant industry and standards-making bodies to determine what has been defined, what's pending, and what important areas are TBD. Whether through industry associations or standards-crafting bodies, enterprises and the entities they need to build private 5G networks are on the road to making these high-speed nets a reality.

Talent: Does your enterprise have staff with the knowledge and skills needed to run and manage a private 5G network? If not, that talent is available for hire from independent contractors, in some cases the same firms that helped U.S. operators with their current public 5G rollouts.

Equipment partners: In the last year, network operators have announced partnerships with an array of wireless equipment vendors, such as Ericsson, Nokia, and Siemens AG, to build private 5G networks. Most of these vendors are already working with enterprises on pilot tests to gain crucial experience with the opportunity. These engagements are underway around the globe.

Security: Enterprises current on 5G are aware that the technology in and of itself is secure. However, this make-or-break responsibility may be beyond the skillsets of some users' corporate IT and related departments. Such firms can engage third-party entities that understand their business, applications, performance levels, and goals.

Spectrum: The CBRS auction enabled Deere, Chevron, several power companies, and manufacturers to buy licenses for 10 MHz slices of spectrum in the shared-usage band. Mechanisms called spectrum access systems are in place to prevent interference in the CBRS spectrum. Companies such as Federated Wireless have provided spectrum to enterprises, often as part of network solutions, since the CBRS auction closed.

In August, Federated and Anterix, a holder of spectrum in the 900 MHz band, announced a partnership to provide a dual-band offering to enterprises in the utilities vertical industry. IT managers should expect more vertical-specific solutions providers - with valuable knowledge of focused applications and experience working in specific markets - to emerge.

Design, construction, and management: Options for private 5G nets are emerging. For example, Betacom, a longtime design and deployment partner to AT&T, T-Mobile, and Verizon, launched a fully managed private 5G network service in late May. The company handles network design, deployment, and management for the offering. Betacom explains enterprise customers retain ownership of the network and local control of their data. Management is provided by its NOC, which is hosted on Microsoft Azure.

Exploring the wireless WAN: For enterprises looking for managed, high-speed connectivity without owning all the components, Boise, Idaho-based Cradlepoint offers a wireless WAN managed service that can include edge networking gear such as 5G routers. Cradlepoint was acquired by Ericsson in September 2020.

Time and timing

Given the early state of the private 5G network ecosystem, IT managers and their businesses aren't hard-pressed to make a quick decision. This can be a positive since many of the players in the ecosystem haven’t established a track record in the space per se.

Those operators, vendors, integrators, and service providers seeking entry into the ecosystem have made news with partnerships, initial services, and tests with household names in the U.S. and abroad; actual commercial deployments are largely in the early stages.

This leaves plenty of time for enterprises to evaluate all aspects of the private 5G network opportunity. The menu of options is under construction.


Rising Upload Speeds Drive Symmetric Internet Access Rollout

With two large spending efforts focused on spreading broadband far and wide in the U.S., operator demand for symmetrical Internet access promises to feed American businesses a high-fiber diet. Though Internet access has long been dominated by asymmetrical services, with their far higher download than upload speeds, today’s work-from-home workforce is driving the need for symmetrical services. And fiber’s ability to deliver matching speeds has caught the attention of businesses, operators, and vendors worldwide.

Two drivers

Why? Two reasons. First, not unlike the country’s transportation infrastructure, water systems, and schools, much of the nation's networking infrastructure could benefit from modernization and expansion.

Researchers see record spending on 10-Gigabit Passive Optical Network (10G PON) infrastructure by operators of almost all sizes. That signals operator interest in deploying and extending these fiber-based systems.

Second, the spread of remote work, school at home, telemedicine, and video surveillance apps requires higher upload speeds on Internet access links. Covid showed us how much we could do from home, so enterprises and service providers alike are planning their infrastructure needs now with this combined growth in mind.

Future-proof rural broadband?

And while fixed wireless, satellite, coaxial cable, and fiber can all deliver broadband to businesses large and small in urban areas or underserved rural regions, carrier spending shows that fiber (the most expensive and demanding medium to deploy) is a favorite for its ability to support symmetrical Internet access now and in the future.

Though massive FCC and federal broadband deployment spending efforts don’t specify a medium for operators, the need for links with matching download and upload speeds is growing.

Supporting higher upload speeds during Covid

According to the Q2 2020 OpenVault Broadband Insights (OVBI) report, some of the upstream changes were already being seen then. The firm provides data analytics and tracking services to operators. And Comcast released network usage stats in early spring, claiming upstream traffic on its network jumped 56% during 2020.

“Pandemic lockdowns changed the nature of upstream usage – in all likelihood, forever,” the more recent OpenVault report noted. “Continued high levels of remote work and a new embrace of videoconferencing for communication needs mean that consumption will pressure the limited upstream capacity of many broadband infrastructures. Moreover, the unique role of the upstream as an enabler of two-way communication makes unfettered performance essential.”

Another issue worthy of IT managers' consideration: uplink data is growing at a higher rate than downlink data, according to AT&T SVP of Wireless and Access Technology Igal Elbaz. And conferencing and collaboration apps, including Zoom, require faster uplink speeds.

Those now driving the need for symmetrical Internet include:

  • Students
  • Businesses
  • Telecommuters
  • Content creators
  • Gamers

As a result of the usage from these groups, even those who aren’t fiber-first operators see the building need for symmetrical Internet access.

Billions for broadband

An injection of billions in funding for the deployment of high-speed networks beyond today's reach gives hope that rural broadband will not forever remain an oxymoron, or a bridge too far for IT managers. The rural broadband issue has gotten renewed attention as IT managers support a home-based workforce.

Launched last year, the Rural Digital Opportunity Fund (RDOF) is the FCC’s latest step to fund service provider deployment of broadband networks in rural America. The agency is directing $20.4 billion, over ten years, to finance up to gigabit speed services in unserved rural areas, connecting millions of American homes and businesses to digital infrastructure.

But for all the plans to extend high-speed broadband to the unserved and underserved, it’s difficult to predict when infrastructure that supports symmetrical Internet access will have a solid rollout timeline.

Progress vs. Congress

Few are laughing at the joke that Congress is the opposite of progress when it comes to government-funded broadband rollout. The President's American Jobs Plan, which contains $100 billion for the massive undertaking, has been slowed, and his current infrastructure plan’s broadband spending has been cut to $65 billion thanks to near gridlock in Washington, D.C.

The President believed the U.S. could bring affordable, reliable, high-speed broadband to every American through a historic investment of $100 billion. It will be interesting to see, if the reworked bill passes, what can be done with the lesser amount.

The President’s original plan prioritized building “future proof” broadband infrastructure in unserved and underserved areas so that we finally reach 100 percent high-speed broadband coverage.

Media favoritism?

Further, an industry association is upset with the FCC’s RDOF plan, claiming the agency favors fiber over other broadband media.

The Wireless Internet Service Providers Association (WISPA) laid out its concerns to the FCC in late June about the agency's RDOF awards. The association, which includes Fixed Wireless Access (FWA) providers, believes the FCC prefers fiber deployments and that the emphasis should be on reaching unserved regions ASAP rather than on symmetrical broadband speeds.

 


IT managers stand to gain faster, symmetric services as fiber network spending climbs.

A steep spike in carrier spending on optical broadband access equipment combined with federal incentives to deliver high-speed service to rural areas is changing the broadband access game and options for IT managers.

This is increasingly being accomplished through the deployment of 10-Gigabit Passive Optical Networks (10G PON), point-to-multipoint access systems that use fiber optic media to flexibly provide advanced services.

Why? Telcos first provided T-1 and Ethernet services for IT managers at SMBs seeking more robust corporate networks, but cablecos won many firms back with cheaper and faster DOCSIS 3.0- and 3.1-based services. Now the spurned telcos have an advantage with the deployment of, and upgrades to, 10G PON networks.

Unlike current DOCSIS services, 10G PON-based services are symmetric (the same upload speed as download), something cablecos cannot yet match. Symmetric speeds are essential for applications such as surveillance, security, and remote monitoring that stream video.
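To put the asymmetry in concrete terms, here is a minimal back-of-the-envelope sketch of how long uploading a large video archive takes on an asymmetric versus a symmetric tier. The plan speeds and file size are illustrative assumptions, not figures from any specific provider:

```python
def upload_seconds(file_gb: float, upload_mbps: float) -> float:
    """Time to upload a file, ignoring protocol overhead."""
    bits = file_gb * 8 * 1000**3          # decimal gigabytes to bits
    return bits / (upload_mbps * 1_000_000)

# A hypothetical 4 GB surveillance-video archive:
# asymmetric plan: 1000 Mbps down / 35 Mbps up
# symmetric fiber: 1000 Mbps down / 1000 Mbps up
print(round(upload_seconds(4, 35)))    # 914 seconds, over 15 minutes
print(round(upload_seconds(4, 1000)))  # 32 seconds
```

The download link is identical on both plans; only the upload path, the one video-heavy applications lean on, makes the difference.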

Targeted businesses

“In addition to helping them better compete with cable in the residential broadband market, telcos’ 10G-PON fiber deployments are also aimed at reclaiming share in the small-medium business (SMB) market, where cable operators have also made substantial gains over the last decade,” explained Jeff Heynen, Vice President, Broadband Access and Home Networking at Dell’Oro Group, which conducts market research spanning telecom, networks, and data center IT industries. “XGS-PON allows them to offer symmetric, multi-gigabit services that cable operators will have a difficult time competing with until they expand fiber networks of their own.” Telcos are also expected to put the full-court press on small enterprises.

Review time for IT managers

This emerging shift in dynamics requires enterprise IT managers to explore the capabilities and potential benefits of 10G PON services. Also suggested: ascertaining from carriers their timelines for upgrading existing plant and deploying the new 10G PON infrastructure.

Much of the push to extend broadband to underserved and unserved expanses of America is powered by the FCC's Rural Digital Opportunity Fund (RDOF) program, which provides carriers funding to reach these areas with high-speed services. Proposed federal and state broadband spending programs are fueling the broadband breakout.

IT managers should expect an increase in 10G PON deployments in the U.S., even before RDOF projects get rolling. While these deployments are primarily for residential applications, many operators see 10G PON as their connection to small and midsize businesses (SMBs) and smaller enterprises.

Spending on 10G PON equipment skyrocketed 500% in the first quarter, according to a newly published report by Dell'Oro Group. Total global revenue for the broadband access equipment market increased to $3.3 billion in the first quarter of 2021, up 18 percent year-over-year.

Upgrading to 10G PON

  • Making the connection. IT managers looking to cash in on 10G PON services will need a standalone optical network terminal (ONT), a piece of termination equipment. Another increasingly popular option is to insert a pluggable module into an existing switch at the user site.
  • Service availability. IT managers will need to make certain the business services they have been using at a site are offered over the 10G PON infrastructure. The same goes for the service-level agreements (SLAs) attached to those services. They should check that the agreements cover everything from the ONT on the user premises to the OLT in the telco network.
  • Protection. Because PON is a shared technology, IT managers must ascertain how traffic will be protected and secured. IT managers are used to having strong SLAs; that should not change. But it does require some knowledge and investigation by the IT manager to make sure that remains the case.

Starting from scratch – broadband options for rural areas

IT managers in areas lacking broadband access cannot be too sure how the RDOF and other broadband-expansion initiatives will bring them high-speed services. The following are the primary options. Time and money are the deciding factors, as carriers must balance the cost of providing broadband access against the time it takes to reach anxious customers.

  • Fixed wireless. A carrier favorite where an antenna and/or tower is less expensive and quicker than trenching and laying fiber, especially in un- and under-served rural areas.
  • Satellite. With the re-emergence of low-earth orbit (LEO) satellites (Starlink from Elon Musk's SpaceX, Amazon, Telesat, and OneWeb), high-speed data can be beamed to locations equipped with a package of small on-premises gear.
  • Fiber. A known option that is the priciest of the three (and the most time-consuming for new installations). Fiber gives telcos the speeds needed to get ahead of DOCSIS 3.0 cable alternatives and to be ready when cablecos step up speeds with DOCSIS 4.0 services.

Fiber for the win?

Despite the competitive options, Heynen believes fiber has the inside track when it comes to rural areas that will be addressed as part of the FCC’s massive RDOF broadband breakthrough program.

"I do think the majority of RDOF money is going to be put towards the expansion of fiber projects because of the demand to provide gigabit speeds,'' Heynen began. "However, quite a number of major projects are going to be Fixed Wireless Access (FWA)-based, largely because they address areas where fiber deployments will simply be too costly."

Operators need to balance the speeds promised with the time and cost it will take to roll out full fiber networks. IT managers should also expect carriers that won funds from the RDOF to use a combination of both fiber and FWA.


As results, performance factors, and challenges come to light, interest spreads for IT managers.

The Olympics in Japan will use facial recognition this summer. And Disney World announced in May it will test the technology for a month to cut wait times at the theme park as part of a more touchless experience.

But before getting serious about cloud-based or on-premises facial recognition systems, IT managers must first determine whether they will meet the desired performance levels required by the enterprise in its application.

Often lost in the heated debate over whether municipalities and law enforcement entities should be allowed to use facial recognition to identify crime suspects is a secondary question: how effective are the AI-driven, software-based systems?

And before we dig into early returns on performance, we must acknowledge that many more verticals could apply facial recognition, beyond enterprises with access control implementations, law enforcement organizations, and large multipurpose sports venues looking beyond identifying staff.

Vertical value

  • Retail. Identify shoplifters to reduce shrinkage.
  • Transportation. Airport security. Reduce waiting times. Check-in kiosks.
  • U.S. government. Deploy for border control uses.
  • Entertainment. Security for mass events at large venues such as the Olympics, etc.
  • Banking. Replace passwords with screen-based face scanning for online banking.
  • Healthcare. Deploy touchless access to secure lab gear, clean rooms, and more.
  • Media/marketing. Test audience reaction to movie trailers, characters in TV pilots, and optimal placement of TV promotions.
  • Consumer products. Measure visual emotions through facial analysis to pick candidates for new or enhanced offerings.
  • Travel. The US Department of Homeland Security predicts that facial recognition will be used on 97% of travelers by 2023.

Performance, historically

How far have facial recognition systems come? The Technology Policy Program at the Center for Strategic and International Studies (CSIS), which produces non-partisan and non-proprietary research, took a look.

Facial recognition has improved dramatically in only a few years, according to CSIS blog author William Crumpler in April 2020. “As of April 2020, the best face identification algorithm has an error rate of just 0.08% compared to 4.1% for the leading algorithm in 2014, according to tests by the National Institute of Standards and Technology (NIST). As of 2018, NIST found that more than 30 algorithms had achieved accuracies surpassing the best performance achieved in 2014.”

But beyond algorithms, how are the systems faring in real-life deployments in the last few years? The answer, of course, depends on how their results are used, not just how they are measured.

Beyond the AI engine

A case in point is the New York Police Department (NYPD), which began a massive storage upgrade in 2009 to support the deployment of its facial recognition systems in parts of the Big Apple, such as the financial district. However, it is important to note that in the results the department released last year, matches serve as just one piece of a larger process whose effectiveness the NYPD itself assesses.

The NYPD has long stated that any “matches” are used as only one part of an investigation. They are just leads, NOT enough to issue arrest warrants. Before a facial recognition match becomes a useful lead for criminal investigators, it is analyzed by a group that reviews each one and weighs in on the quality, and hence viability, of the match.

Further review

“If possible matches are identified, then trained Facial Identification Section investigators conduct a visual analysis to assess the reliability of a match and conduct a background check to compare available information about the possible match and relevant details of the investigation,” according to the NYPD FAQ.

Specifically, the NYPD uses facial recognition to compare images obtained during criminal investigations with lawfully possessed arrest photos. When used in combination with human analysis and additional investigation, facial recognition technology is a valuable tool in solving crimes and increasing public safety.

The NYPD measures the effectiveness of its facial recognition software by how well it provides substantial leads in criminal cases. Last year, the NYPD released results for the 2019 calendar year, explaining that its “Facial Identification Section received 9,850 requests for comparison and identified 2,510 possible matches, including possible matches in 68 murders, 66 rapes, 277 felony assaults, 386 robberies, and 525 grand larcenies.” “The NYPD knows of no case in New York City in which a person was falsely arrested on the basis of a facial recognition match,” it added.

What affects facial recognition performance

Those in the know cite three variables that can impact the accuracy of a facial recognition system’s ability to make matches. They are the person’s pose, illumination, and expression (PIE) in the photo.

The mask factor

And perhaps not much of a concern before the global pandemic was the use of face masks. The long-term prospects for facemask use after COVID-19, in the U.S. and elsewhere, are unclear. But masks have left their mark.

Consider research and testing from NIST. The agency tested 89 commercial facial recognition algorithms using digitally applied masks and found error rates of 5% to 50% in matching masked faces to photos of the same person.

Vendors are hard at work improving those error rates. The BBC reported that a US Department of Homeland Security "controlled-scenario test" in January found one facial recognition system with a 96% success rate, although the results of those tested "varied greatly."

Environment dependent

And while it is a safe bet that we will see less mask-wearing, save for the winter months, some facial recognition systems will still have to deal with what are known as facial occlusions. These include such longtime items as hats, sunglasses, scarves, and heavy facial hair, which are known to weaken the performance of facial recognition systems, especially in applications that rely on video surveillance.

IT managers with controlled access scenarios in mind for their systems may not need to be concerned with this variable in facial recognition accuracy.

The road ahead

Facial recognition systems are attracting broader interest in the form of tests and actual deployments by users including Disney World and the 2021 Olympics. But deployments that include municipal law enforcement entities face headwinds from those with privacy and Big Brother concerns.


Enterprise, law enforcement, and sports venue use expands, raising the need for IT infrastructure evaluation to support the resource-demanding systems.

Whether it is access control for employees, clearing fans to fill a stadium for mass events, or helping identify suspected criminals, users are forging ahead with the evaluation, testing, and implementation of facial recognition systems.

And while law enforcement entities have encountered headwinds in the form of privacy and Big Brother concerns, interested IT managers across industries all need to consider infrastructure requirements for facial recognition systems.

Your biometric options

For many security-conscious IT managers, key cards are giving way to the biometric family of approaches, which use speech, fingerprints, retina scans, and even breath to provide or prevent access to facilities and systems.

Another biometric approach, facial recognition, found its way from science fiction and spy movie material at the turn of the century to actual use by law enforcement, the military, and sports venues.

Early uses

Largely unbeknownst to the masses, facial recognition was tested/used at Super Bowl 35 in Tampa in 2001, just months before the 9/11 terrorist attacks on New York City and the Pentagon.

Since then, the New York Police Department has pushed to implement facial recognition but has run head-on into opposition from privacy advocates, the American Civil Liberties Union, and those fearing Big Brother use of the advanced technology.

Seeing its value for limited access-control applications, the NFL’s New Orleans Saints disclosed that it implemented a facial recognition software system from AnyVision in December 2019. The team explained that the system gives players, staff, and other workers secure, seamless access to practice facilities. Not long after, COVID-19 hit the U.S.

How matchmaking works

Facial recognition is used to identify or confirm a person's identity using their face. It can be employed to identify individuals in pictures, videos, or in real-time.

The systems use algorithms to spot distinctive details of a person's face, such as the location of their eyes, the shape of their nose, or the appearance of their chin. By combining these attributes, they can read the geometry of a face to match it to an image. Most facial recognition systems use 2D images rather than 3D because it is easier to match a 2D image with public photos or those in a database.
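The geometry-reading step can be sketched in a few lines of Python. This is a simplified illustration, not any vendor's actual algorithm: the landmark coordinates are invented for the example, and a real system would obtain them from a face-detection model.

```python
import math

# Hypothetical 2D landmark coordinates (in pixels), as a face
# detector might emit them for one photo.
landmarks = {
    "left_eye": (112, 140),
    "right_eye": (188, 142),
    "nose_tip": (150, 190),
    "chin": (150, 260),
}

def feature_vector(points):
    """Turn landmark geometry into a scale-invariant feature vector.

    Each pairwise distance is divided by the inter-eye distance, so
    the vector stays the same whether the face appears large or small
    in the frame.
    """
    names = sorted(points)
    eye_dist = math.dist(points["left_eye"], points["right_eye"])
    feats = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            feats.append(math.dist(points[a], points[b]) / eye_dist)
    return feats

vec = feature_vector(landmarks)
print(len(vec))  # 4 landmarks -> 6 pairwise distances
```

Two faces can then be compared by measuring how close their feature vectors are; production systems apply the same idea to far richer learned representations.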

The system then converts the facial image to data. How does that work? The best explanation of this step comes to us from security software vendor Norton: “The face capture process transforms analog information (a face) into a set of digital information (data) based on the person's facial features. Your face's analysis is essentially turned into a mathematical formula. The numerical code is called a faceprint. In the same way that thumbprints are unique, each person has their own faceprint.”

This faceprint is then run through a database of faces seeking a match in what can be a huge repository of many millions of images, with the largest found in law enforcement and crime-fighting organizations.

If your image is matched with one in a facial recognition database, the system notifies authorities who decide if it is a close enough match to act on.
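The faceprint-matching and "close enough to act on" steps can be combined into a minimal Python sketch. The three-element faceprints and the 0.95 threshold here are illustrative assumptions; real systems compare neural-network embeddings of hundreds of dimensions against databases of millions of records.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two faceprints: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy faceprint database: name -> numeric vector.
database = {
    "alice": [0.9, 0.1, 0.4],
    "bob": [0.2, 0.8, 0.5],
}

def match(probe, db, threshold=0.95):
    """Return the best match at or above the threshold, or None.

    The threshold embodies the 'close enough to act on' decision:
    operators tune it to trade false accepts against false rejects.
    """
    best_name, best_score = None, threshold
    for name, faceprint in db.items():
        score = cosine_similarity(probe, faceprint)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

print(match([0.88, 0.12, 0.41], database))  # prints: alice
```

At law-enforcement scale, the linear scan above would be replaced by an approximate nearest-neighbor index, but the thresholded decision at the end is the same.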

Infrastructure issues

The infrastructure required to support an effective facial recognition operation depends on how the technology is being used and the volume of images matched against. For example, if an enterprise is using facial recognition in an access-control application to confirm the identity of employees, it may be able to get by with upgrades to its current corporate IT infrastructure.

However, if a large, distributed enterprise is considering an implementation for all employees and workers, network and storage systems might need to be dedicated or augmented, along with computing power in the data center.
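A back-of-envelope sizing calculation illustrates why storage planning matters at enterprise scale. The per-person record sizes below are assumptions for illustration only, not vendor figures.

```python
# Rough storage estimate for an access-control deployment, assuming
# (hypothetically) one 512-dimension float32 embedding plus one
# ~200 KB enrollment photo per person.
employees = 50_000
embedding_bytes = 512 * 4         # 512 float32 values
photo_bytes = 200 * 1024          # ~200 KB enrollment JPEG

total_mb = employees * (embedding_bytes + photo_bytes) / 1024**2
print(f"{total_mb:.0f} MB")       # on the order of 10 GB
```

Ten gigabytes is trivial for a single site, but once multiple camera feeds, retained video, and audit logs are added across a distributed enterprise, dedicated storage and network capacity quickly become justified.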

In either instance, high-speed network connections are a critical piece of the facial recognition system to ensure the low latency needed for high-volume scanning and matching in time-sensitive applications, such as access control.

Network needs

In a metropolitan law enforcement implementation, a network of surveillance cameras serves as the foundation for the facial recognition software, which matches captured images against a large central database backed by massive storage.

By contrast, a solution for an NFL venue could run on the stadium's backbone network, with storage upgrades in the facility's data center. Newer, multi-purpose NFL stadiums such as Atlanta's Mercedes-Benz Stadium are built around an on-site data center, which processes all data on game day and backs it up to the vendor's cloud that same day.

Originally used by telecom carriers to deliver triple-play service bundles to the home, a Gigabit Passive Optical Network (GPON) connects a wide array of high-speed devices and systems to the stadium's data center. The Mercedes-Benz Stadium GPON runs over 4,000 miles of fiber-optic cable, and GPONs can handle the low-latency needs of high-volume scanning and matching.

When it hosted Super Bowl 53 in February 2019, the facility’s backbone network supported:

  • 15,000 Ethernet ports
  • Wi-Fi equipment
  • 700 POS devices
  • A security system
  • Keycard-accessed doors

Though the NFL used facial recognition as early as 2001, evaluations of the technology have since shifted to individual teams. Some are operating small-image-volume applications to verify the identity of employees and players entering stadium facilities.

From staff access to fan security

IT infrastructure revamps could be in the plans should sports teams expand their facial recognition implementations beyond players, staff, and suppliers to include the gameday crush of 65,000 to 90,000 fans, all rushing to pass through often-jammed stadium entry points.
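A rough estimate shows the scale of that challenge, using the article's crowd figures; the number of gates and the entry window are hypothetical assumptions.

```python
# Hedged back-of-envelope: how fast must scanning and matching run
# to clear the gate lines before kickoff?
fans = 70_000        # within the article's 65,000-90,000 range
gates = 40           # hypothetical number of entry lanes
window_min = 60      # assume everyone should be inside within an hour

per_gate_rate = fans / gates / (window_min * 60)  # people/sec per gate
budget_ms = 1000 / per_gate_rate                  # time budget per person
print(f"{budget_ms:.0f} ms per person")
```

Roughly two seconds per fan per lane, end to end, which includes camera capture, network transit, and the database match itself; that is why the low-latency backbone networks described above matter as much as raw compute.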

The same goes for older stadium venues that have added infrastructure over the years to support Wi-Fi 6, 5G service, and advanced video replay systems for fans and TV broadcasters.