Image: Tomasz/Adobe Stock

Edge is complex. Once we overcome the mind-boggling enormity and earth-shattering reality of understanding this basic claim, perhaps we can begin to build frameworks, architectures and services around the task at hand. Last year's State of the Edge report from The Linux Foundation put it succinctly: “Edge, with all its complexities, has become a fast-growing, strong and demanding industry in its own right.”

Red Hat appears to have taken a stoic appreciation of the complex edge management role that lies ahead for all enterprises now moving their IT stacks to deploy in this space. The company says it sees edge computing as an opportunity to “extend the open hybrid cloud” to all the data sources and end users that populate our planet.

By specifying endpoints as different as those found on the International Space Station and your local neighborhood pharmacy, Red Hat now aims to clarify and validate the parts of its own platform that address specific endpoint challenges.

On the bleeding edge of the edge

The mission is this: although the edge and the cloud are closely related, we need to enable computing solutions outside the data center, at the bleeding edge of the frontier.

“Organizations are looking at edge computing as a way to optimize performance, cost and efficiency to support a variety of use cases in industries ranging from smart city infrastructure, patient monitoring, gaming and everything in between,” said Erica Langhi, senior solution architect at Red Hat.

SEE: Don’t Curb Your Enthusiasm: Trends and Challenges in Edge Computing (TechRepublic)

Clearly, the concept of edge computing represents a new way of looking at where and how information is accessed and processed to build faster, more reliable and more secure applications. Langhi advises that while many software application developers may be familiar with the concept of decentralization in the broader network sense of the term, there are two key considerations for the edge developer to focus on.

“The first one has to do with data consistency,” Langhi said. “The more scattered the edge data is, the more consistent it needs to be. If multiple users try to access or modify the same data at the same time, everything must be synchronized. Edge developers should consider messaging and data streaming capabilities as a powerful foundation for maintaining data consistency, building in native data transfer, data aggregation and integrated services for edge applications.”
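To make Langhi’s first point concrete, the following is a minimal sketch, not any Red Hat implementation, of an edge node streaming sensor readings to a message broker so that downstream services work from a single, ordered stream of events. The broker address, topic name and the kafka-python dependency are assumptions made purely for illustration.

```python
# Minimal sketch: an edge node publishing sensor readings to a Kafka topic
# so downstream consumers see one ordered stream of events.
# Assumes a broker at "edge-broker:9092" and a topic "sensor-readings"
# (both hypothetical), plus the kafka-python package.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="edge-broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_reading(sensor_id: str, value: float) -> None:
    """Send one reading; keying by sensor_id keeps per-sensor ordering."""
    event = {"sensor_id": sensor_id, "value": value, "ts": time.time()}
    producer.send("sensor-readings", key=sensor_id.encode("utf-8"), value=event)

publish_reading("pump-42", 7.3)
producer.flush()  # block until the broker has acknowledged the batch
```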

Edge’s meager requirements

This need to spell out the intricacies of edge environments stems from the fact that this is a different kind of computing: there is no customer presenting a “requirements specification” document and a set of user interface preferences. At this level, we are dealing with more granular, machine-level technology constructs.

The second key consideration for edge developers is dealing with security and management.

“Working across a wide-ranging area of data means that the attack surface is now extended beyond the data center, with data at rest and in motion,” explained Langhi. “Edge developers can adopt encryption techniques to help protect data in these scenarios. With increased network complexity as thousands of sensors or devices are connected, edge developers must strive to implement automated, consistent, scalable and policy-driven network configurations to maintain security.”
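Her point about protecting data at rest and in motion can be illustrated with a small, hedged example. The sketch below encrypts an edge payload with symmetric encryption from the widely used cryptography package; the inline key generation is a simplification, since in practice keys would come from a secrets manager.

```python
# Minimal sketch of protecting edge data at rest and in motion with
# symmetric encryption via the "cryptography" package. Key handling is
# deliberately simplified: in a real deployment the key would be fetched
# from a vault or KMS, not generated inline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in reality: retrieved from a secrets manager
cipher = Fernet(key)

payload = b'{"sensor_id": "pump-42", "value": 7.3}'

token = cipher.encrypt(payload)    # safe to write to local disk or send upstream
restored = cipher.decrypt(token)   # performed by the authorized consumer

assert restored == payload
```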

Finally, she says, by choosing an immutable operating system, developers can impose a reduced attack surface, thereby helping organizations deal with security threats effectively.

But what really changes the game from traditional software development to edge infrastructures for developers is the variety of target devices and their integrity. That’s the view of Markus Eisele in his role as a developer strategist at Red Hat.

“While developers typically think about frameworks and architects think about APIs and how to tie everything back together, a distributed system that has compute units at the edge requires a different approach,” Eisele said.

What is needed is a complete and secure supply chain. That starts with integrated development environments, hosted on secure infrastructures, that help developers build binaries for various target platforms and computing units; Eisele and team point to Red Hat OpenShift Dev Spaces, a zero-configuration development environment that uses Kubernetes and containers.
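As a rough illustration of building for multiple target platforms and computing units (and not a depiction of the OpenShift Dev Spaces workflow itself), the sketch below wraps a multi-architecture container build. It assumes Docker with the buildx plugin is available; the image name and platform list are placeholders.

```python
# Minimal sketch: produce container images for several target platforms
# from one source tree. Assumes Docker with the buildx plugin; the image
# name and platform list are illustrative only.
import subprocess

PLATFORMS = ["linux/amd64", "linux/arm64"]
IMAGE = "registry.example.com/edge/sensor-agent:1.0.0"

subprocess.run(
    [
        "docker", "buildx", "build",
        "--platform", ",".join(PLATFORMS),
        "--tag", IMAGE,
        "--push",          # publish the multi-arch manifest to the registry
        ".",
    ],
    check=True,
)
```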

Base binaries

“Ideally, the automation that works here goes well beyond a successful build, on to tested and signed binaries of verified base images,” Eisele said. “These scenarios can become very challenging from a management perspective, but should be repeatable and minimally invasive to the inner and outer life cycles for developers. While there isn’t much change at first glance, there’s even less room for error. Especially when we think about the security of the generated artifacts and how it all comes together while still allowing developers to be productive.”
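To ground the idea of tested and signed binaries from verified base images, here is a minimal sketch of signing a build artifact and verifying the signature before it ships to an edge device, using Ed25519 keys from the cryptography package. Real supply chain pipelines typically lean on dedicated signing tooling such as sigstore; the artifact filename here is hypothetical.

```python
# Minimal sketch: sign a built artifact and verify it before deployment,
# using Ed25519 keys from the "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held by the build system
public_key = private_key.public_key()        # distributed to edge devices

artifact = open("sensor-agent.bin", "rb").read()  # hypothetical build output
signature = private_key.sign(artifact)

try:
    public_key.verify(signature, artifact)   # raises if the artifact was tampered with
    print("artifact verified, safe to deploy")
except InvalidSignature:
    print("verification failed, refusing to deploy")
```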

Eisele’s inner and outer loop reference pays homage to the complexity at work here. The inner loop is a single developer workflow where code can be tested and changed quickly. The outer loop is the point at which code is committed to a version control system or to part of a software pipeline closer to the point of production deployment. For further clarification, we can also remind ourselves that the above notion of software artifacts refers to the entire set of elements that a developer can use and/or create to build code. This can include documentation and annotation notes, data models, databases, other forms of reference material and the source code itself.

SEE: Hiring Kit: Back-end Developer (TechRepublic Premium)

What we know for sure is that, unlike data centers and the cloud, which have been around for decades, edge architectures are still evolving, and at an exponentially faster rate.

Avoiding tailor-made pitfalls

“The design decisions that architects and developers make today will have a lasting impact on future capabilities,” said Ishu Verma, technical evangelist for edge computing at Red Hat. “Some edge requirements are unique to each industry, but it’s important that design solutions are not tailor-made for the edge, as this can limit an organization’s future agility and ability to scale.”

Red Hat’s edge-focused engineers insist that a better approach involves building solutions that can run on any infrastructure—cloud, on-premise, and edge—as well as across industries. The consensus here seems to be strongly gravitating toward choosing technologies like containers, Kubernetes, and lightweight application services that can help establish future-ready agility.

“Common elements of edge applications across multiple use cases include modularity, segregation and immutability, which is what makes containers relevant,” said Verma. “Applications will need to be deployed at many different edge tiers, each with its own unique resource characteristics. Combined with microservices, containers representing instances of functions can be scaled up or down depending on underlying resources or conditions to meet the needs of customers at the edge.”
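Verma’s point about scaling container instances up or down with underlying conditions can be sketched, under assumptions, with the official Kubernetes Python client. The deployment name, namespace and scaling rule below are illustrative rather than prescriptive.

```python
# Minimal sketch: scale a containerized edge function up or down based on
# an observed condition, using the official Kubernetes Python client.
# Deployment name, namespace and threshold are illustrative.
from kubernetes import client, config

def scale_edge_function(queue_depth: int) -> None:
    config.load_kube_config()            # or load_incluster_config() on-cluster
    apps = client.AppsV1Api()

    # Scale with demand: one replica per 100 queued events, capped at 5.
    replicas = min(5, max(1, queue_depth // 100))
    apps.patch_namespaced_deployment_scale(
        name="sensor-aggregator",
        namespace="edge-site-01",
        body={"spec": {"replicas": replicas}},
    )

scale_edge_function(queue_depth=320)     # would set 3 replicas
```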

Edge, but on a scale

All of these challenges lie before us, then. But while the message is don’t panic, the task becomes more difficult if we need to engineer software applications for edge environments that are capable of scaling securely. The edge at scale brings the challenge of managing thousands of endpoints spread across many different locations.

“Interoperability is key to the edge at scale, as the same application must be able to run everywhere without being redesigned to fit a framework required by an infrastructure or cloud provider,” said Salim Khodri, an edge specialist for EMEA markets at Red Hat.

Khodri’s comments reflect the fact that developers will want to know how they can harness the benefits of edge without changing the way they develop, deploy and support applications. That is, they want to understand how they can accelerate edge computing deployments and combat the complexity of distributed deployments by making the edge programming experience as consistent as possible using their existing skills.

“Consistent tools and modern application development best practices, including integration of CI/CD pipelines, open APIs and native Kubernetes tools, can help address these challenges,” explained Khodri. “This is to ensure the portability and interoperability of edge applications in a multi-vendor environment, along with the processes and tools to manage the application lifecycle at the distributed edge.”
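As a loose illustration of the portability Khodri describes, the sketch below shows an edge service exposing plain HTTP health endpoints, the kind of open, conventional interface that lets the same container run on any Kubernetes distribution without change. Flask and the endpoint names are choices made here for brevity, not requirements.

```python
# Minimal sketch: an edge service with conventional health endpoints so the
# same container image can run unchanged on any Kubernetes distribution.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/healthz")
def healthz():
    return jsonify(status="ok")          # liveness probe target

@app.route("/readyz")
def readyz():
    return jsonify(status="ready")       # readiness probe target

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```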

It would be difficult to count the key takeaways here on one hand; two hands would be challenging and might require the use of several toes as well. The keywords are perhaps open systems, containers and microservices, configuration, automation and, of course, data.

The decentralized edge may spring from the DNA of the data center and consistently maintain its intimate relationship with the cloud backbone of the IT stack, but it is, essentially, a disconnected pairing.

