While ChatGPT and other generative artificial intelligence (GenAI) tools have recently demonstrated how far AI has evolved, the reality is that the technology, generative and not*, has already been integrated into every aspect of our lives for a while now. Businesses ignore AI at their peril.
Businesses Cannot Ignore AI
AI technologies influence our media and social media habits and what we choose to stream online. They appear at nearly every juncture of our medical care. Facial recognition is an AI technology used to unlock our phones…and cross national borders in lieu of printed passports. It is the driving force behind autonomous vehicles (pun intended), voice assistants, smart houses, fraud prevention, spam filters, recommendation systems, personalized coupons and targeted ads, internet searches, GPS applications, job applicant filters, agricultural decisions, chatbots, data security, traffic management, logistics and supply chain management, and more. Think of any area in your life or business and it’s nearly a guarantee that AI is involved in some aspect of it. Generative AI has simply given us a new way of comprehending how advanced the technology has become, and the potential advancements that lie ahead.
AI technology is also unique in how quickly and widely new iterations spread across societies and around the world, presenting an unusual challenge for individuals, businesses, and policymakers. It’s a complex, multifaceted, and rapidly changing technology. Control and ownership (and therefore action) sometimes rest with a single corporation, and other times with a million different users. The potential for doing good is vast. So is the potential for undermining the foundations on which we’ve built our communities, our businesses, and our countries.
The Bottom Line
AI is here, it’s staying, and it’s changing rapidly. If your business or organization hasn’t yet started to think about how AI, and AI policy, will affect you, now is the time to start. The opportunities and risks seem endless, but you should start asking yourself questions like:
Could AI improve your product (or your competitor’s), or make it worse? Could it replace it entirely?
How should your employees use AI at work, and in what ways could their use of AI open you up to risk?
Does your organization benefit from more or less AI regulation?
Want to Read More?
For a deep dive into two experts’ views on how nation states and policymakers might govern and regulate AI, check out this piece in Foreign Affairs by Ian Bremmer and Mustafa Suleyman.
For more on one way in which AI may impact intellectual property and licensing—and by extension, journalism, media, creative endeavors like writing plays, movies, music, and books, and more—read about the New York Times’ conflict with ChatGPT and a recent U.S. District Court decision regarding copyright protection for AI-created works.
For some insight into public opinion on AI, Vox.com pulled together a nice summary of a recent study.
*Generative vs. Non-Generative AI: AI is a complex area of technological development, which can most succinctly be defined as creating systems and processes that can emulate human decision-making or behavior in some way. Non-generative, or traditional, AI is a process in which the technology does a specific task, following specific rules, and doesn’t create anything new. For example, when you play chess against a program or app, the software “knows” the rules of chess and can play with you. It cannot, however, invent new ways of playing the game. Traditional AI essentially takes in data and provides analysis.
Generative AI is considered the “next generation” in the evolution of AI (which is itself only a step on the way to self-learning AI, more on that below). Generative AI takes the rules given to it, but instead of replicating them back in already existing ways, it creates new outputs. When an AI art generator creates a work of art, it is using the rules and design styles it was given, but creating a brand new output that didn’t exist before. It cannot invent new rules or styles, that is, change the inputs it was given, but it can create original outputs using those rules. Generative AI takes in data, and creates something new with it.
The ultimate goal (or fear) is AI that can work completely independently, without human intervention. In that final iteration, the AI “learns” as it analyzes both data inputs and its own outputs, correcting mistakes and creating new rules as it goes along. In this theoretical final outcome, machines will no longer need humans to guide the process, enabling them to conduct their assigned tasks entirely autonomously. The big question is: could that lead to a machine that decides it has no need for humans?