
Saturday, May 11, 2024

What Else Will Apple Do to Support AI?

Apple is negotiating to use ChatGPT features in Apple’s iOS 18, according to a Bloomberg report. That raises the question of what else Apple might eventually do in the artificial intelligence area, since that rumored deal seems centered on adding chatbot features. 


Other approaches would be needed to support many of the anticipated iPhone use cases, ranging from speech-to-text to translation to camera functions or health and fitness features, for example. 


Some of those approaches will likely lead to more custom chips. Apple's A-series chips already power iPhones and iPads, and its M-series chips power Macs, so moves to add more on-board AI processing capability would be logical. Expanded neural engines or dedicated co-processors also are possible avenues. 


Core ML is Apple's framework for developers to build and integrate AI features into their apps, and likely will be another avenue of development.
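
As a rough illustration of that developer path (a minimal sketch only, not Apple's prescribed workflow for any particular app), a trained model can be converted to the Core ML format with Apple's coremltools Python package and then run on-device; the model choice and input shape below are just examples:

```python
# Minimal sketch: convert an off-the-shelf PyTorch model to Core ML format.
# Assumes torch, torchvision, and coremltools are installed; the resulting
# .mlpackage can be added to an Xcode project and invoked from Swift.
import torch
import torchvision
import coremltools as ct

model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)        # image-shaped example input
traced = torch.jit.trace(model, example_input)    # TorchScript trace for conversion

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="image", shape=example_input.shape)],
    compute_units=ct.ComputeUnit.ALL,             # allow CPU, GPU, and Neural Engine
)
mlmodel.save("MobileNetV2.mlpackage")
```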


That points up a major difference between consumer and enterprise use cases: consumer use cases will tend to favor on-device implementations, while enterprise use cases will tend to favor remote processing. In the area of personalization, for example, enterprise workloads typically can rely on remote processing, as presently is the case for most AI-related personalization efforts in the advertising and content areas. 


Many consumer use cases for personalization will be more contextual, based on location (typically on a smartphone, in a mobile context), and might require more local processing. Activity trackers are another area that will often require real-time processing, which likewise leans toward on-board processing. 


Likewise, facial recognition and other security features might need to be processed on board, rather than remotely, as will other image processing and text-to-speech or speech-to-text use cases. Real-time translation also will tend to work best on board. 
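
To make that reasoning concrete, a hypothetical placement heuristic might weigh latency, privacy, and connectivity when deciding where to run an inference; everything below (names, thresholds, categories) is an illustrative assumption, not any vendor's actual logic:

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    latency_budget_ms: int    # how quickly a result is needed
    privacy_sensitive: bool   # e.g., facial recognition, health data
    needs_large_model: bool   # e.g., long-form generative tasks
    online: bool              # whether the device currently has connectivity

def choose_placement(req: InferenceRequest) -> str:
    """Illustrative heuristic: keep real-time or private work on the device;
    send heavyweight, latency-tolerant work to remote processing."""
    if req.privacy_sensitive or req.latency_budget_ms < 100 or not req.online:
        return "on-device"
    if req.needs_large_model:
        return "remote (edge or cloud data center)"
    return "on-device"

# Real-time translation during a conversation -> on-device
print(choose_placement(InferenceRequest(50, False, False, True)))
# Enterprise ad-personalization batch job -> remote
print(choose_placement(InferenceRequest(5000, False, True, True)))
```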


Feature | Consumer AI Examples | Enterprise AI Examples
--- | --- | ---
Personalization | Recommendation engines in shopping apps; news feeds curated based on user preferences | Targeted marketing campaigns; personalized customer service interactions
Efficiency & Automation | Smart assistants for scheduling appointments; voice commands for device control | Robotic process automation (RPA) for repetitive tasks; predictive maintenance in manufacturing
Data Analysis & Insights | Activity trackers that analyze fitness data; sleep monitoring apps that provide personalized recommendations | Customer sentiment analysis from social media; predictive analytics for inventory management and demand forecasting
Security & Fraud Detection | Facial recognition for unlocking phones; spam filtering in email applications | Fraud detection in financial transactions; anomaly detection in network security

Even looking only at generative AI for content creation use cases, enterprise applications will tend to work well with remote processing, as many of those activities are not highly latency sensitive. 


source: McKinsey, Seeking Alpha 


Saturday, March 30, 2024

Which Edge Will Dominate AI Processing?

Edge computing advantages generally are said to revolve around use cases requiring low-latency response, and the same is generally true for artificial intelligence processing as well. 


Some use cases requiring low-latency response will be best executed “on the device” rather than at a remote data center, and often on the device rather than at an “edge” data center. 


That might especially be true as some estimate consumer apps will represent as much as 70 percent of total generative artificial intelligence compute requirements. 


So does that mean we see graphics processing units (GPUs) on most smartphones? Probably not, even if GPU prices fall over time. We’ll likely see lots of accelerator chips, though, including more use of tensor processing units, neural processing units and application-specific integrated circuits, for reasons of cost.  


The general principle is that the cost of computing facilities increases, while efficiency decreases, as computing moves to the network edge. In other words, centralized computing tends to be the most efficient, while computing at the edge--which involves huge numbers of distributed processors--is necessarily more capital intensive. 


For most physical networks, as much as 80 percent of cost is at the network edges. 


Beyond content delivery, however, many have struggled to define the business model for edge computing, whether from an end-user experience perspective or an edge computing supplier perspective. 


Sheer infrastructure cost remains an issue, as do compelling use cases. Beyond those issues, there arguably are standardization and interoperability issues similar to multi-cloud, complexity concerns and fragmented or sub-scale revenue opportunities. 


In many cases, “edge” use cases also make more sense for “on the device” processing, something we already see with image processing, speech-to-text and real-time language translation. 
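
Speech-to-text is a convenient example of how far local processing has already come. As a rough sketch (using the open-source openai-whisper package, which runs the model entirely on local hardware; the file name is a placeholder):

```python
# Minimal sketch of local speech-to-text with the open-source Whisper model.
# Assumes the openai-whisper package and ffmpeg are installed; "meeting.wav"
# is a placeholder path to an audio file on the device.
import whisper

model = whisper.load_model("tiny")          # small model sized for local hardware
result = model.transcribe("meeting.wav")    # runs entirely locally, no cloud call
print(result["text"])
```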


To be sure, battery drain, processors and memory (and therefore cost) will be issues, initially. 


On-Device Use Case | Benefits | Considerations
--- | --- | ---
Image processing (basic) | Privacy: processes images locally without sending data to servers. Offline functionality: works even without an internet connection. Low latency: real-time effects and filters. | Limited model complexity: simpler tasks like noise reduction or basic filters work well on-device. Battery drain: complex processing can drain battery life.
Voice interface (simple commands) | Privacy: voice data stays on device for sensitive commands. Low latency: faster response for basic commands (e.g., smart home controls). | Limited vocabulary and understanding: on-device models may not handle complex requests. Limited customization: pre-trained models offer less user personalization.
Language translation (simple phrases) | Offline functionality: translates basic phrases even without internet. Privacy: sensitive conversations remain on device. | Limited languages and accuracy: fewer languages and potentially lower accuracy compared to cloud-based models. Storage requirements: larger models for complex languages might not fit on all devices.
Message autocomplete | Privacy: keeps message content on device. Offline functionality: auto-completes even without internet. | Limited context understanding: relying solely on local message history might limit accuracy. Personalized experience: on-device models may not adapt to individual writing styles as well.
Music playlist generation (offline) | Offline functionality: creates playlists based on the downloaded music library. Privacy: no need to send music preferences to the cloud. | Limited music library size: on-device storage limits playlist diversity. Static recommendations: playlists may not adapt to changing user tastes as effectively.
Maps features (limited functionality) | Offline functionality: access basic maps and navigation even without internet. Privacy: no user location data sent to servers for basic features. | Limited features: offline functionality may lack real-time traffic updates or detailed points of interest. Outdated maps: requires periodic updates downloaded to the device.


Remote processing (whether at edge data centers or centralized data centers) will tend to favor use cases such as augmented reality, advanced image processing, personalized content recommendations and predictive maintenance. 


Latency requirements for these and other apps will tend to drive the need for edge processing.


Saturday, March 23, 2024

Can GenAI Replace Search?

Many seem to accept a Gartner analyst's opinion that AI queries will replace search, an obvious enough conclusion for those who use generative AI engines routinely.


Some might even agree that “by 2026, traditional search engine volume will drop 25 percent, with search marketing losing market share to AI chatbots and other virtual agents,” according to Alan Antin, Gartner analyst. 


Somewhat obviously, such a shift--at some scale--will potentially reshape organic and paid search as venues for marketing spend. But ask yourself: does GenAI fundamentally change the experience of “lean forward” media as compared to “lean back?” Does GenAI turn “lean forward” into “lean back” or vice versa?


It remains true that each successive wave of electronic media has shifted marketer spending, from radio to broadcast TV to cable TV to the internet, search engine marketing, social media advertising and mobile advertising. 


As virtual environments such as the metaverse are commercialized, advertising will migrate there as well.


To some extent, these shifts were zero-sum games: what one emerging industry gained, the legacy media lost. The issue with generative or any other form of AI is the degree to which hybrid use models will emerge, where AI-assisted ad placement and formats develop as part of all existing venues. 


In other words, if AI becomes a core feature of search, social media, gaming, productivity apps, digital content venues and shopping, does GenAI necessarily disrupt, or might it disrupt and shift some amount of activity, but also reinforce existing venues and methods? 


In other words, does GenAI used “as an app” develop as a “new medium” or does it mostly remain a feature of existing media? 


To be sure, some might believe GenAI could revolutionize media by creating entirely new forms of storytelling, entertainment, and information dissemination. 


Others might see that as a remote possibility, with the more likely impact being the reshaping of all existing media. 


For example, GenAI might enable new forms of “interactive fiction,” where users experience narratives that adapt to user choices, generating personalized storylines and branching paths in real-time. Keep in mind that this also was expected for legacy media, by analysts considering the rise of interaction itself. Not so much has really changed, save for gaming use cases, though. “Interactive TV” has flopped, for example. 


AI-powered characters in games whose behavior is personalized for each user are more likely to happen, as is the application of GenAI to create metaverse and augmented reality experiences. But none of those are examples of media replacement. 


In other words, some of us would not agree that “search marketing” is exposed to replacement by use of GenAI. 


GenAI is most likely to modify existing media formats, making them more personalized, interactive, and immersive. Based on what happened with interactive TV (or storytelling in general), it seems unlikely that a brand new medium will emerge from GenAI. 


To the extent that GenAI becomes a core feature of search, social media and nearly all other experiences and apps, GenAI might not actually be a “threat” to search. 


Think of the established categories of “lean forward” experiences, such as interacting with a PC or smartphone, as compared to the “lean back” experience of video, television, movies or music. GenAI as a feature will be used mostly to create those experiences, but might not change the fundamental “lean forward” experience of work, learning, search, shopping or gaming.


Likewise, changing the “lean back” nature of entertainment might not be desirable for movies, video, TV or musical experiences and storytelling in general. 


The way we consume media can be categorized into two main types: lean forward and lean back.


Lean forward media require active engagement and focus. Examples include:

Playing video games

Browsing the web or using search

Using social media platforms

Reading e-books

Working on a computer

Mental State: Engaged and alert, requiring concentration.

Physical Posture: Can vary, but often involves sitting at a desk or holding a device.


Lean back media require minimal user effort and are largely a passive experience:

Watching television

Listening to music

Watching movies

Reading a physical book

Attending a concert or play

Mental State: Relaxed and receptive, focused on enjoying the content.

Physical Posture: Often involves sitting or reclining comfortably.


If GenAI were not tightly integrated with all “lean forward” experiences, one might have a better argument for replacement. But that is unlikely to be the case. Likewise, it is not clear that GenAI changes the fundamental “lean back” experience of storytelling in the form of books, TV, video, movies, music, concerts and plays.


Even if one assumes both search and GenAI chatbots are forms of "lean forward" experience, it is very hard to see a permanent stand-alone role, as GenAI already is rapidly being incorporated into all enterprise and consumer software and experiences.


So GenAI becomes a feature of search; not a replacement.


Tuesday, March 12, 2024

GenAI Consumes Lots of Energy, But What is Net Impact?

Much has been made of a recent study suggesting ChatGPT operations consume prodigious amounts of electricity, as exemplified by the claim that ChatGPT operations consume 17,000 times more energy than a typical household.  
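
For scale, a quick back-of-envelope reading of that claim (assuming, as a rough U.S. average, household electricity use of about 10,500 kWh per year; the multiplier itself is simply taken at face value):

```python
# Back-of-envelope scale check for the "17,000 times a typical household" claim.
# Assumption (not from the study): an average U.S. household uses roughly
# 10,500 kWh of electricity per year, i.e. about 29 kWh per day.
household_kwh_per_day = 10_500 / 365                        # ~28.8 kWh/day
implied_chatgpt_kwh_per_day = 17_000 * household_kwh_per_day

print(f"{implied_chatgpt_kwh_per_day:,.0f} kWh/day")         # ~489,000 kWh/day
print(f"{implied_chatgpt_kwh_per_day / 1000:,.0f} MWh/day")  # ~489 MWh, i.e. ~0.5 GWh/day
```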


No question, cloud computing requires remote data centers, and data centers are big consumers of energy. In the United States, data centers now account for about four per cent of electricity consumption, and that figure is expected to climb to six per cent by 2026, according to reporting by The New Yorker. 


But that is not the whole story. Data centers, apps and cloud computing are used to design, manufacture and use all sorts of products that might also decrease energy consumption. Some would argue, for example, that there is a net energy reduction when people use ridesharing instead of driving their personal vehicles. 


Study Title | Location | Key Findings
--- | --- | ---
Life Cycle Energy Consumption of Ride-hailing Services: A Case Study of Taxi and Ride-Hailing Trips in California (2020) | California, USA | Ridesharing resulted in 11-23% lower energy consumption compared to private vehicles, primarily due to higher vehicle occupancy.
The Energy and Environmental Impacts of Shared Autonomous Vehicles Under Different Pricing Strategies (2023) | N/A (hypothetical scenario) | Shared autonomous vehicles (SAVs) with high occupancy rates have the potential for significant energy savings compared to private vehicles.
Future Transportation: The Social, Economic, and Environmental Impacts of Ridesourcing Services: A Literature Review (2022) | N/A (literature review) | Ridesharing can potentially reduce vehicle miles traveled (VMT) compared to private vehicles, leading to lower energy consumption; however, concerns include increased empty miles driven by rideshare vehicles searching for passengers and potential substitution of public transportation trips, negating some environmental benefits.
Life-Cycle Energy Assessment of Personal Mobility in China (2020) | China | Ridesharing with three passengers can reduce energy consumption.
The Energy and Environmental Impacts of Shared Autonomous Vehicles (2021) | N/A | Shared autonomous vehicles can reduce energy consumption.
Empty Urban Mobility: Exploring the Energy Efficiency of Ridesharing and Microtransit (2019) | Europe | High-occupancy ridesharing reduces energy consumption compared with private vehicles, but energy consumed while not transporting passengers must also be accounted for.


So far as I can determine, nobody has really tried to model the net energy impact of generative artificial intelligence, data centers or cloud computing: that is, comparing their energy footprint with the possible reductions throughout an economy when products and services built on those outputs are used to lower energy consumption elsewhere. 
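
Such a model would, at minimum, have to net the gross data-center footprint against downstream savings. A toy accounting sketch of that structure (the categories are illustrative and the values are deliberately left as placeholders, not estimates):

```python
# Toy structure for a net-impact calculation: gross energy consumed by AI,
# data center and cloud workloads minus energy avoided in the downstream
# activities they enable. Values are placeholders, not real estimates.
gross_ai_energy_twh = 0.0            # fill in: attributable data center energy

avoided_energy_twh = {               # fill in: downstream savings enabled
    "route optimization (trucking)": 0.0,
    "predictive maintenance (rail)": 0.0,
    "remote work / avoided commuting": 0.0,
}

net_impact_twh = gross_ai_energy_twh - sum(avoided_energy_twh.values())
print(f"Net energy impact: {net_impact_twh:+.2f} TWh "
      "(positive = net increase, negative = net reduction)")
```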


Study Title | Key Findings
--- | ---
Green Cloud? An Empirical Analysis of Cloud Computing and Energy Efficiency (2020) | Cloud computing adoption improves user-side energy efficiency, particularly after 2006. SaaS (Software-as-a-Service) contributes most significantly to both electric and non-electric energy savings; IaaS (Infrastructure-as-a-Service) primarily benefits industries with high internal IT hardware usage.
The Internet: Explaining ICT Service Demand in Light of Cloud Computing Technologies (2015) | Cloud computing can lead to increased energy consumption in data centers, with potential for energy savings in other sectors due to reduced need for personal computing devices and improved resource utilization and consolidation.
Decarbonizing the Cloud: How Cloud Computing Can Enable a Sustainable Future (McKinsey & Company, 2020) | Cloud adoption powered by renewables can significantly reduce emissions compared to on-premises IT infrastructure; cloud enables the development of various sustainability solutions (smart grids, remote work).
Cloud Computing: Lowering the Carbon Footprint of Manufacturing SMEs? (2013) | Case studies of manufacturing SMEs shifting to cloud-based solutions.


But some related research suggests ways of looking at net energy footprint. 


Industry | Cloud-Based Solution | Potential Fuel Savings | Source
--- | --- | --- | ---
Trucking | Route optimization with real-time traffic data | Up to 10% | DoT
Railroad | Predictive maintenance for locomotives | 5-10% | Wabtec
Shipping | Optimized container loading and route planning | 5-15% | Massey Ferguson


The point is that “net” impact is what we are after.

