How Edge Computing Could Unlock Mobile AI's Potential
Mark Gamble is a Product and Solutions Marketing Director at Couchbase, a software company based in Santa Clara that delivers Capella, a cloud database platform that helps developers build apps. Mark has two decades of experience as a product marketer, having previously worked at SAP, OpenText and Winmore.
Here, he tells us about the techniques that can reduce the computational burden on mobile devices, and how true mobile AI can extend beyond smartphones to IoT devices.
Extending mobile AI beyond smartphones
As enterprises explore the potential of GenAI and large language models, Mark believes that true mobile AI can extend beyond smartphones to IoT devices.
“Mobile AI can extend beyond smartphones to IoT devices through an ‘edge AI’ architecture, where applications and data processing run locally, including directly on the devices themselves. This approach involves leveraging lightweight, embeddable AI models that are optimised for resource-constrained environments, like single board computers in sensors or household smart appliances, as well as a mobile database with vector search that runs on-device and synchronises data to the cloud,” he says.
This allows an organisation to balance on-device processing with cloud-based support, combining the scale needed to handle the immense volumes of data AI requires with the immediacy to act on it effectively.
“This strategy can make everyday objects smarter while managing limited resources, and reduce data transmission and latency,” he adds.
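To make the pattern concrete, here is a minimal, illustrative sketch of that edge AI architecture: a small on-device embedding routine and local vector index answer queries without a round trip to the cloud, and only compact records are queued for later synchronisation. The names used (embed, LocalVectorIndex, the sync queue) are hypothetical stand-ins, not Couchbase's or any vendor's actual API.

```python
# Minimal sketch of an on-device "edge AI" pattern: a lightweight embedding
# routine and vector index run locally, and only compact records are synced
# upstream. All names here are illustrative, not a real SDK.
import math
from typing import List, Tuple

def embed(text: str, dims: int = 8) -> List[float]:
    """Stand-in for a small on-device embedding model."""
    vec = [0.0] * dims
    for i, ch in enumerate(text.lower()):
        vec[i % dims] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class LocalVectorIndex:
    """In-memory vector store standing in for an embedded mobile database."""
    def __init__(self):
        self.items: List[Tuple[str, List[float]]] = []
        self.pending_sync: List[str] = []

    def add(self, doc_id: str, text: str) -> None:
        self.items.append((doc_id, embed(text)))
        self.pending_sync.append(doc_id)      # queue for later cloud sync

    def search(self, query: str, k: int = 3) -> List[Tuple[str, float]]:
        q = embed(query)
        scored = [(doc_id, sum(a * b for a, b in zip(q, v)))
                  for doc_id, v in self.items]
        return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

    def sync(self) -> List[str]:
        """Push only the queued doc IDs upstream instead of raw sensor data."""
        pushed, self.pending_sync = self.pending_sync, []
        return pushed

index = LocalVectorIndex()
index.add("reading-1", "compressor vibration above normal range")
index.add("reading-2", "ambient temperature steady at 21C")
print(index.search("unusual vibration"))   # resolved on-device, no round trip
print("synced:", index.sync())             # only compact records leave device
```

In a production app the embedding model and store would come from an edge-ready SDK, but the division of labour is the same: inference and search happen on the device, and only what needs to be shared leaves it.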
So how can this create a truly intelligent operational system? According to Mark, extending AI and data processing to IoT devices allows individual devices to operate more autonomously, without depending on a central cloud control point, enabling real-time processing and decision-making.
“This allows for faster responses and improved uptime, even without constant cloud connectivity,” says Mark. “Combining this with data synchronisation creates a distributed intelligence that can operate and share data in isolation, creating a network of smart devices that can adapt to different situations and collaborate effectively.”
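As a rough illustration of that autonomy, the sketch below shows a device acting on local inference immediately and holding events in an outbox until connectivity returns, at which point the backlog is synchronised. The names (EdgeDevice, local_infer, CloudLink) and the threshold rule are hypothetical, standing in for whatever model and transport a real deployment would use.

```python
# Minimal sketch of offline-first operation: the device acts on local
# inference in real time and defers sharing until the cloud is reachable.
from collections import deque

def local_infer(reading: float) -> str:
    """Stand-in for a lightweight on-device model."""
    return "open_vent" if reading > 30.0 else "hold"

class CloudLink:
    """Toy connectivity stub; send() succeeds only while online."""
    def __init__(self):
        self.online = False
        self.received = []
    def send(self, event: dict) -> bool:
        if self.online:
            self.received.append(event)
        return self.online

class EdgeDevice:
    def __init__(self, link: CloudLink):
        self.link = link
        self.outbox = deque()           # events waiting for connectivity

    def handle(self, reading: float) -> str:
        action = local_infer(reading)   # real-time decision, no round trip
        self.outbox.append({"reading": reading, "action": action})
        self.flush()
        return action

    def flush(self) -> None:
        while self.outbox and self.link.send(self.outbox[0]):
            self.outbox.popleft()       # shared once the cloud is reachable

link = CloudLink()
device = EdgeDevice(link)
print(device.handle(34.2))              # acts immediately, even while offline
link.online = True
device.flush()                          # backlog synchronised on reconnect
print(len(link.received), "events shared with the cloud")
```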
The future of mobile AI processing
Mark is confident that edge computing is key to fully unlocking mobile AI's potential.
“By processing data and running AI models directly on devices, edge computing reduces latency, improves privacy and supports AI in areas with limited or no connectivity,” he says. “This ensures more responsive and reliable AI applications across mobile and IoT devices. Edge computing facilitates personalised AI experiences through local data processing, enhances security by keeping sensitive information on-device and enables AI-driven applications in remote environments where cloud may not be an option.”
With a cloud-to-edge database like Couchbase, users can create apps that take best advantage of resource-constrained devices by processing AI where it's most appropriate for the use case.
“You might have a mobile app that processes basic input, such as simple prompts, audio or images, through a lightweight AI model on-device,” concludes Mark. “But for more complex inputs, like highly detailed prompts, the app might call large cloud-based models for the most accurate response. The capability for data processing and vector search on-device and in the cloud, along with automatic data synchronisation, is what enables this optionality and allows for the most efficient use of edge devices.”
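A simplified sketch of that optionality might look like the following, where a crude prompt-length heuristic stands in for whatever signal a real app would use to route between models; both model functions and the threshold are illustrative assumptions rather than any specific product's behaviour.

```python
# Minimal sketch of the routing Mark describes: light inputs stay on-device,
# heavier ones fall through to a larger cloud-hosted model when available.
def on_device_model(prompt: str) -> str:
    """Stand-in for a lightweight local model."""
    return f"[edge] quick answer to: {prompt}"

def cloud_model(prompt: str) -> str:
    """Stand-in for a large cloud-based model."""
    return f"[cloud] detailed answer to: {prompt}"

def answer(prompt: str, cloud_available: bool = True) -> str:
    # Simple inputs (short prompts here) are handled locally for low latency;
    # detailed prompts go to the larger model only when connectivity allows.
    if len(prompt.split()) <= 12 or not cloud_available:
        return on_device_model(prompt)
    return cloud_model(prompt)

print(answer("turn on the hallway light"))
print(answer("compare last month's energy usage across all rooms and "
             "suggest a schedule that cuts overnight heating costs"))
```

The design choice is the same one described above: the device handles what it can, and the cloud is an option rather than a dependency.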