I had the privilege last week of attending the 2023 Berkeley Business Analytics & AI Summit — the theme was Analytics, AI, and Society: Towards a Wiser World?
For me, the most intriguing thing about my day at Berkeley was the chance to set aside my narrow B2B content strategy and operations lens and instead peer through a far wider one, which seems especially appropriate at the end of a year filled with news of artificial intelligence’s impact. The day’s impressive speakers covered everything from the future of work and warehouse robotics to the state of California’s digital strategy, but for this blog I’m going to focus on AI use cases far afield from B2B marketing. While not my usual fare, I thought it a worthwhile effort.
AI And Mental Healthcare — Partners Or Opponents?
Dr. Jodi Halpern, Berkeley Chancellor’s Chair, professor of bioethics, and a well-known speaker and author, talked about our incredibly stressed healthcare system and the potential for AI to assist mental health professionals and their patients. She highlighted the 50% burnout rate among doctors. Some of the contributing factors, such as onerous paperwork and medical record-keeping, may be partially handled by AI, freeing doctors to focus on patient care.
When it comes to care, she noted that psychiatry has never been able to crack the code on suicide prediction but that, while AI is not perfect, its predictions are far better than any that doctors have been able to make alone. Patients are using AI themselves in more direct ways — when it comes to companionship and loneliness, for example. Dr. Halpern told a story about a young widow who developed a relationship with a bot that helped her deal with work-related stress and parenting challenges. The bot acted in the role of partner, assisting with decision-making and validation. The potential downsides range from dependency similar to social media addiction to more serious outcomes. For example, bots targeting people with mental illness in Facebook groups promote intimacy and trust but immediately abandon the user at any mention of suicidal ideation. And there are cases of bots responding in unexpected ways, such as the psychotherapy bot designed to help with eating disorders that instead told users with anorexia how to restrict food and lose weight.
The Intersection Of AI And War
While I expected politics, the economy, and healthcare to get airtime at this event, I did not expect to be at the edge of my seat hearing stories about war. Two speakers, both speaking over Zoom from Ukraine, offered powerful narratives. The first was Oleksandra Matviichuk, the Ukrainian human rights lawyer who won the Nobel Peace Prize for her work in 2022. She and her team at the Center for Civil Liberties, a Kyiv-based human rights organization, are using AI and data analytics to verify and document war crimes. They have come far in their ambitious goal to record the crimes that occur in every village, with 59,000 episodes documented to date. She explained how critical it is to use technology to gather and tell the stories of human pain, in contrast to the way that dictators and war criminals use technology to destroy facts, truth, and trust. In addition to the work being done in Ukraine, she talked about the ways that AI is helping analyze photos taken 30 years ago in the Balkans to help identify and locate Serbian war criminals. “Our task is to unite technologists and humanitarians to fight for the future,” Matviichuk said.
Also from Ukraine, we heard from Dr. Yegor Aushev, the CEO and cofounder of Cyber Unit Technologies, a cybersecurity company focused on the ongoing cyber response to Russia’s invasion. Dr. Aushev began his presentation by telling the audience that it had already been a big day of bombing in Ukraine, while two days earlier had marked one of the war’s most intense cyberattacks. He and his team have trained scores of experts and 40 state organizations to help protect Ukraine’s cyberspace, an effort that he explained is intentionally decentralized for security. He talked about the sharp increase in attacks and a new generation of cybercriminals using AI to create disinformation and deepfakes, such as an AI-generated image of Volodymyr Zelenskyy announcing that Ukraine would surrender to Russia. Aushev said that it’s his goal to continuously reinvent incident response to face the next generation of attacks, and he noted that the approaches used against Ukraine can be reused against any Western nation to create disinformation, chaos, and panic.
Takeaways From The Lone B2B Marketer
Networking during the event gave me the opportunity to meet an interesting mix of attendees from all over the world — engineers, physicists, and doctors, as well as CEOs from both startups and large enterprises. What I didn’t encounter was a single other B2B marketing professional. On reflection, I found myself considering how the event expanded my thinking about AI beyond what I examine as a Forrester analyst in B2B marketing. My conclusion is that the fundamentals are, in fact, the same: AI can mimic, improve upon, and scale human expertise in practically anything. And just like humans, it can do as much harm as it can do good. But the degree of harm and good seems more profound in mental health and war, for example, than in business marketing. In the grand scheme of things, those domains are perhaps always the more important ones. For me, though, AI was the common thread that got me thinking more about these topics than I otherwise would have — and that’s a good thing.