Wednesday, January 8, 2025

AI--Among Other Things--Is Changing the Person-Machine Interface

Person-machine interfaces have clearly changed over the past 60 years, and generative artificial intelligence seems to be paving the way for another evolution.

Among the changes AI will bring:


To be sure, many of those trends and interface directions are already present in some form. But AI should enable interactions that are “easier” because they are “smarter.” An analogy is the earlier shift from command-line interfaces to graphical user interfaces, and then to the browser, touch, and voice interactions.


Generative AI adds a shift to natural language interactions with computing devices and apps. 


This should enable more people to function as coders without needing extensive programming knowledge. Low-code and no-code tools should become practical, allowing non-technical users to customize apps and build their own features.
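To make that concrete, here is a minimal sketch (in Python) of how a plain-English request might be routed to an existing app feature without the user writing any code. The ask_model() helper is a hypothetical stand-in for a call to whatever generative-AI service the app uses.

    # Minimal sketch of a natural-language, "no-code" interface: a plain-English
    # request is mapped onto one of a few predefined app actions.

    def ask_model(prompt: str) -> str:
        """Hypothetical LLM call; a real version would query a generative-AI API."""
        # For illustration only: pretend the model chose an action name.
        return "create_report"

    ACTIONS = {
        "create_report": lambda: print("Generating the monthly sales report..."),
        "schedule_meeting": lambda: print("Scheduling a meeting..."),
        "export_data": lambda: print("Exporting data to CSV..."),
    }

    def handle_request(user_text: str) -> None:
        """Translate a plain-English request into one of the known actions."""
        prompt = (
            f"Pick the single best action name from {list(ACTIONS)} "
            f"for this request: {user_text!r}. Answer with the name only."
        )
        choice = ask_model(prompt).strip()
        ACTIONS.get(choice, lambda: print("Sorry, I didn't understand."))()

    handle_request("Put together the monthly sales report for me")

The point of the sketch is the shape of the interaction: the user supplies intent in ordinary language, and the software, not the user, handles the translation into executable steps.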


Also, all sorts of formerly arduous tasks (writing code, conducting research) should be automated, creating capabilities that might once have required staffs of people and substantial amounts of work.


One (quite old at this point) example is batch processing. My first coding was done on a time-shared mainframe (you had to sign up for a specific time to use the machine). 


We’d submit our tasks or jobs as a stack, or deck, of punch cards. Then we’d wait, picking up a printout of the results at some later time. It was cumbersome, time-consuming, error-prone, limited, and unfriendly.


By way of illustration, an IBM System/370 had a central processing unit operating at up to 5 MHz, with 8 MB of memory. It could handle thousands of “floating point” additions per second. 


An iPhone 16 Pro CPU operates at GHz speeds. The iPhone has at least 1000 times more random access memory than the top mainframe models of the 1970s.


The iPhone's Neural Engine can perform 35 trillion operations per second, a capability that didn't exist in 1970s mainframes.


The iPhone achieves this performance in a handheld device, while 1970s mainframes required large rooms and extensive cooling systems. 
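For a rough sense of scale, the back-of-the-envelope arithmetic looks like this (the mainframe clock and memory figures are the ones cited above; the iPhone clock speed of roughly 4 GHz and 8 GB of RAM are assumed round numbers for an iPhone 16 Pro):

    # Back-of-the-envelope comparison of the figures discussed above.
    mainframe_clock_hz = 5e6        # IBM System/370: up to ~5 MHz
    mainframe_ram_bytes = 8e6       # 8 MB of memory on top configurations

    iphone_clock_hz = 4e9           # assumed ~4 GHz CPU clock
    iphone_ram_bytes = 8e9          # assumed 8 GB of RAM
    iphone_npu_ops_per_sec = 35e12  # Neural Engine: 35 trillion operations/second

    print(f"Clock-speed ratio: {iphone_clock_hz / mainframe_clock_hz:,.0f}x")   # ~800x
    print(f"Memory ratio:      {iphone_ram_bytes / mainframe_ram_bytes:,.0f}x") # ~1,000x

Even before counting the Neural Engine, the device in your pocket is roughly three orders of magnitude beyond the room-sized machine.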


The iPhone also handles a wide range of tasks, from complex computations to graphics rendering, going far beyond the specialized functions of early mainframes, which produced text and numeric output only. Keep in mind that with batch processing there was zero visual feedback on any specific job or task, nor any direct interaction with the machine (everything was mediated by the computing center staff).


Since that time we have gained the ability to sit at a terminal or screen and give programs instructions directly (the command line). Then we got graphical user interfaces, which gave us windows, icons, menus, and pointers, along with the mouse for navigation (before GUIs we navigated with the “up,” “down,” “left,” and “right” arrow keys). It was far more intuitive.


More recently, the web browser has become a major interface, as has “touch” and “speech” and “gestures.”


The point is that user interfaces matter, and that the history of user interfaces is a story of increasing accessibility and intuitiveness. AI will provide the next advance in interfaces.

