Generative AI and Edge Computing can work synergistically to deliver personalized, intelligent insights at scale. Processing massive volumes of data at the edge, closer to the point of generation or consumption, reduces bandwidth costs by avoiding the need to transfer data to the cloud for analysis. Additionally, insights at the edge remain available even when connectivity to the cloud is limited or absent. Organizations are investing in AI tools and technologies not only to enhance value for their customers but also to reduce operating costs by augmenting the productivity and efficiency of developers.

Originally published on vmblog.com.

The only way to scale AI applications will be to distribute them, with the help of edge computing

I predict that the convergence of Edge and Cloud AI is the way to deliver AI at scale, with the cloud and edge offloading computational tasks to each other as needed. For instance, the edge can handle model inference while the cloud handles model training. Or, the edge may offload queries to the cloud based on heuristics such as the length of a prompt.
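One such offload heuristic can be sketched in a few lines. This is a minimal, hypothetical example: the token limit, the naive whitespace tokenization, and the `route_query` function are illustrative assumptions, not part of any specific product.

```python
# Hypothetical edge/cloud router: short prompts stay on the local edge
# model, long prompts are offloaded to the cloud.
EDGE_TOKEN_LIMIT = 128  # assumed capacity of the edge model

def route_query(prompt: str, edge_limit: int = EDGE_TOKEN_LIMIT) -> str:
    """Return 'edge' or 'cloud' based on a crude token-count heuristic."""
    token_count = len(prompt.split())  # naive whitespace tokenization
    return "edge" if token_count <= edge_limit else "cloud"
```

In practice a router like this might also weigh battery state, current connectivity, or required accuracy, but prompt length alone already captures the core idea of splitting work between edge and cloud.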

When it comes to a successful AI strategy, it would be cost-prohibitive to run all AI computing solely in the cloud. Coupled with energy and power requirements and cloud data center egress charges, the operating costs of delivering AI computing can be very high. Companies need to consider an edge computing strategy, in tandem with the cloud, to enable low-latency, real-time, personalized AI predictions in a cost-effective way, with a lower carbon footprint and without compromising on data privacy and sovereignty.

The success of Edge AI will depend on advancements in lightweight AI models

To make Edge AI a viable option, AI models need to be lightweight and capable of running on resource-constrained embedded devices and edge servers while continuing to deliver results at acceptable levels of accuracy.

Models need to strike the right balance: they must be small, computationally lean, and energy efficient so they can run at the edge while still delivering accurate results. While a lot of progress has been made in model compression, I predict continued innovation in this space, which, coupled with advancements in processors, will make Edge AI ubiquitous.
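To make the compression idea concrete, here is a minimal sketch of post-training int8 quantization, one common technique for shrinking models for edge deployment. Symmetric per-tensor scaling is assumed for simplicity; real toolchains typically use per-channel scales and calibration data, and the function names here are illustrative.

```python
# Minimal sketch of symmetric int8 post-training quantization.
def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard against all-zero weights
    q = [round(w / scale) for w in weights]  # values now fit in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# each restored value lies within half a quantization step of the original
```

Storing 8-bit integers instead of 32-bit floats cuts model size roughly 4x, which is exactly the kind of trade-off between footprint and accuracy that lightweight edge models depend on.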

AI tools will separate the good developers from the exceptional ones, playing an integral role in developer productivity

Good developers will lean on AI tools to lighten their workload. Exceptional developers will use AI tools and assistants to boost productivity on repetitive, mundane tasks so they can focus on being creative, tackling hard problems, and handling the higher-value work that promotes innovation.

While I caution against developers becoming too reliant on AI tools and leaning on productivity tools to do all or most of their work for them, the reality is that AI will continue to play a critical role in developer productivity. Developers should understand the limitations of these tools and exercise good judgment when using them, because overuse can stifle innovation and critical thinking. Moreover, the results may not be accurate, up to date, or the most efficient way to solve the problem.

Edge computing, lightweight AI models and a focus on empowering developers will move the needle on AI in 2024

Lightweight AI models coupled with hardware innovations at the edge, and the convergence of Edge and Cloud AI, will be instrumental to a successful AI strategy in 2024 and beyond. Additionally, engineering organizations will continue to look for ways to leverage AI tools and assistants to accelerate developer productivity while fostering creativity and innovation. In the coming year, I look forward to seeing how organizations expand their use of generative AI to further evolve their operations and build breakthrough solutions for their customers by shifting workloads to the edge. 

Learn more

What do you think about these trend predictions? Follow along with our various AI developments through this blog.

Author

Posted by Priya Rajagopal, Director, Product Management

Priya Rajagopal is a Director of Product Management at Couchbase responsible for developer platforms for the cloud and the edge. She has been professionally developing software for over 20 years in several technical and product leadership positions, with 10+ years focused on mobile technologies. As a TISPAN IPTV standards delegate, she was a key contributor to the IPTV standards specifications. She has 22 patents in the areas of networking and platform security.
