For most of the 10 years she has operated
TIB Creative Studio LLC in Providence, graphic designer Theresa Barzyk has relied on a low-tech questionnaire given to new clients to spark project ideas.
But lately, she’s used another method, too. When Barzyk finds herself stuck creatively, she’s found that consulting with ChatGPT can nudge the process of generating fresh concepts back on track.
The ideas spit out by the web-based chatbot are rarely impressive, Barzyk says. “But if it comes up with anything creative, I’ll work off of that,” she said. And whether it’s for idea generation or image enhancement, Barzyk now uses AI on a near-daily basis.
Other small-business owners are sure to follow, as recent advances in artificial intelligence – and its growing accessibility and affordability – have fueled a meteoric rise in its use, or at least in conversations and research about its use, as companies of all sizes try to figure out if and how the technology can give them an edge.
In the process, the technology has spurred diverse reactions, ranging from fear to excitement to uncertainty.
For some small-business owners and their employees, AI is a dark cloud that’s no longer just sitting on the horizon. They worry clients
and employers will settle for AI-generated work if it means saving money, even if the results are subpar or ethically dubious. Some workers who have spent their careers fine-tuning skills fear losing their jobs entirely to a technology touted in many cases as being able to do the same tasks in seconds and even improve on them.
Others see an opportunity unlike any other in modern times. To those people, using AI to perform tasks doesn’t have to mean increased stress. Instead, AI could present an opportunity to free workers to focus on creative thought and innovation that require a human touch.
And for many, such as Barzyk, it’s something in between: at times useful, but also limited in accuracy and quality. Then there are those who aren’t sure what to think.
The technology’s limitations are significant, says Suresh Venkatasubramanian, a professor of data science and computer science at
Brown University whose work has included national efforts to uphold ethics while using AI.
While the idea that AI and machine learning – computer systems that use algorithms and statistical models to learn and adapt without following explicit instructions – can easily replace most job functions isn’t accurate, Venkatasubramanian says, concerns that people will lose their jobs to the technology aren’t unfounded, either.
“There’s a tendency that keeps happening where businesses say, ‘We can automate all of these systems, we don’t need to hire all of these people,’ ” Venkatasubramanian said. “They fire a bunch of people, then realize [the AI systems] don’t work as well as they thought they did. They don’t deliver on their promises, and at that point, people have already lost their jobs.
“I think there’s a tension between wanting to ride what looks like a big wave but not fully appreciating the degree to which there is a mismatch between the hype around AI and what it can actually do,” Venkatasubramanian said, noting that the technology already has a concerning track record of perpetuating discrimination against already-marginalized groups and producing inaccurate information.
Nevertheless, few dispute that businesses will feel the impact of AI advancements. Whether the technology will set businesses and workers up to fail or thrive depends largely on recognizing its actual capabilities, and on where it needs regulation, observers say.
DOUBLE-EDGED SWORD
AI has skyrocketed to the forefront of public awareness since the tech developer OpenAI launched its chatbot, ChatGPT, in late 2022. That launch caused a sensation in part because the technology crossed a crucial threshold in accuracy and human-like ability to perform mental tasks, and because it’s free, easily accessible and simple for the average person to try.
That development has caused ripples of curiosity and anxiety in the Rhode Island business community.
Diane Fournaris, the director of the
Rhode Island Small Business Development Center, says the staff has been fielding multiple calls each week about AI. Many businesses that call with worries about AI are content-producing services, Fournaris says.
“Blogs, websites, that’s where the most concern is,” she said. “Our clients are asking us if this is going to be as good as having a person do it, and how good is the content, which leads us to believe it could have some kind of impact on those businesses.”
It’s still too early to predict the technology’s large-scale effect on Rhode Island’s business community, Fournaris says, but it seems to be “a double-edged sword” based on early questions and comments.
Chris Parisi doesn’t see it as double-edged.
Parisi, president of the Providence digital marketing agency
Trailblaze Marketing, says if local companies embrace AI, Rhode Island has the chance to lead a massive shift in business and the workforce. In fact, the former chairman of the
Rhode Island Small Business Coalition has launched a podcast exploring the impact of artificial intelligence on Rhode Island businesses.
“The ultimate vision is to make Rhode Island a leader in the AI revolution,” Parisi said, just as “we were once a leader in the American Industrial Revolution.”
In his own business, Parisi and his employees use AI “in every service and process,” he said, from automated video editing to transcribing, creating social media clips and writing notes.
At the same time, Parisi said, he’s hired a new employee and expects to continue expanding his staffing due to “the demand for AI … becoming more important for our services and for our clients” as more businesses adopt it.
While Parisi empathizes with business owners and workers who worry about AI, he says it would be a mistake to not capitalize on it.
Ideally, Parisi said, AI will “enhance you, not replace you,” and provide a big boost to small businesses without the time or finances to support more hiring or other resources.
He’s taken his cause to the Statehouse, discussing with Gov. Daniel J. McKee the possibility of forming an “AI task force” to start working on the challenges that the evolving technology creates.
“There are concerns with AI, and rightfully so,” he said. “That’s why we need to create a task force to understand, and so that businesses understand the ethics when it comes to AI, and so we can work with federal regulations … and lead by example.”
For his part, McKee appears to be interested in discussing artificial intelligence and its impact on the business community.
“As artificial intelligence becomes more prevalent in our lives, it’s important that our state continues to stay on top of the advancing innovations,” McKee’s spokesperson Olivia Darocha said in a statement. “We look forward to exploring [it] further with stakeholders and business owners, including Chris Parisi, over the next few months.”
State Rep. Lauren H. Carson didn’t want to wait.
Carson said she saw enough potential for misuse of AI and machine learning that she sponsored two pieces of legislation in the last General Assembly session intended to get lawmakers thinking about it. “We as a state have an obligation … to really study this,” she said.
One bill would have authorized the
R.I. Office of the Attorney General to adopt and enforce rules and regulations on generative AI models such as ChatGPT “in order to protect the public’s safety, privacy and intellectual property rights.” Another bill would have required mental health providers to disclose any use of AI to clients.
Both bills were based on legislation in Massachusetts, but neither measure made it out of committee. And a Senate bill that would have established a commission to study the use of AI in the decision-making process of state government suffered the same fate, never making it to the Senate floor for a vote.
In June, the House did pass a resolution asking that the
R.I. Department of Administration and the Division of Information Technology review the use of “automated decision systems” in state government and develop recommendations for regulatory or legislative action. A report is due by Sept. 29.
Carson suspects there will be a raft of legislation next session proposing controls on AI.
In the meantime, Carson says she has had some success in her own experimentation with ChatGPT.
“I was playing with AI about two weeks ago and got it to draft a piece of legislation for me that was actually pretty good,” Carson said. “So it’s out there, there’s no doubt about that.”
LEGAL QUESTIONS
You don’t need to tell Nicole Benjamin and other lawyers like her that AI is out there.
Benjamin, a shareholder at
Adler Pollock & Sheehan P.C. and president of the
Rhode Island Bar Association, says that AI’s capabilities, such as performing sophisticated writing and research tasks, can allow lawyers to focus on other parts of their jobs.
“I’m hopeful if lawyers are using more artificial intelligence in their practices and becoming more efficient with the work they’re doing, that will free up additional time for them to take on more pro bono or low bono services,” Benjamin said.
But the technology has also shown its weaknesses. In June, two lawyers at a New York City firm were fined $5,000 after using ChatGPT-generated case law. The problem? The case law they cited, which included quotes and other citations, didn’t exist.
“ChatGPT created the case law instead of finding the case law, and inadvertently, the lawyers started citing case law that was not a court decision,” Benjamin said, demonstrating that “if ChatGPT can’t find case law that’s favorable, it might go ahead and create case law that’s favorable.”
That doesn’t mean lawyers need to avoid the technology altogether, Benjamin says, but ethically using AI “requires competency to know the technology and how it works.”
And the legal concerns go beyond lawyers using AI chatbots.
In professions involving confidential information, workers also are limited in what information they can ethically provide to AI technology such as ChatGPT, says Alicia Samolis, a partner at
Partridge Snow & Hahn LLP.
“The risk is [inputted information] is going to be discoverable, so you’re typing in a question and it’s recording what you’re saying,” Samolis said.
Businesses that don’t work with highly sensitive information may not take issue with that, “but you really can’t do it in a legal industry where much of what we generate … is either attorney-client privilege or at least has proprietary information in it,” she said.
But Samolis remains optimistic about the technology’s overall potential and said businesses should adopt AI even if it invokes anxiety.
“Whether you like it or not, if you’re a business, you need to learn how to have it help your business and adapt to it,” she said, “because your competitors absolutely will.”
DARK SHIFT?
Fournaris hopes to see AI used in a way that complements, rather than eliminates, jobs, particularly among small-business owners who can’t hire a large staff.
“A lot of [small businesses] struggle with maintaining a schedule, an online presence, blog posting,” Fournaris said, “so it is possible to take some of those tasks away from a business owner and allow the business owner to concentrate on the business.”
Others see it playing out differently. Richard McIntyre, a professor of economics at the
University of Rhode Island, said that while business owners may see financial savings through automating job functions, “AI is only going to exacerbate the root of not only our economic but our political problems.”
Like Parisi, McIntyre sees parallels to the Industrial Revolution – but he is thinking of workers’ struggles amid the massive societal shifts.
“Mechanization can make our lives better and easier, but it can also cause suffering,” McIntyre said. “Now, we’re going to see people who have invested time and money into education, which all of the sudden, very quickly is going to become devalued.”
And amid intense political polarization and numerous human rights crises, McIntyre doesn’t see the government as well-positioned to ease this transition.
In the past, the answer to people losing their livelihoods was to “dramatically expand social services,” McIntyre said. “That has been the answer in the past, that if there is a safety net that people can rely on, then the social consequences, the political consequences are less.
“That means though, that the state, and to some extent the nonprofit sector, has to do more,” he said.
‘HUMAN IN THE LOOP’
Long before ChatGPT, the maritime industry was using AI and machine learning to track such things as the fuel use of fleets of ships and suggest strategies to increase efficiency.
Providence-based
Moran Shipping Agencies Inc. saw the potential benefits.
For more than 80 years, agents at the company have been assisting foreign-flagged vessels as they sail through U.S. waters and dock at 100 domestic ports. Then the AI disruption came.
Two years ago, the company purchased a Singapore-based tech startup called Spoolify to jump-start an effort by Moran to build an AI platform for the maritime market. It was rebranded Attender and spun off from Moran.
“In layman’s terms, it’s an Angie’s List of port services,” said Jason E. Kelly, Moran’s executive vice president.
In the shipping industry, a superintendent in another country might be tasked with overseeing five vessels throughout the world, Kelly says. If one of those vessels comes to Providence, for instance, in need of an unexpected repair or inspection, working out logistics could take weeks.
Instead, Attender uses AI to match the job with the best-qualified vendors instantly.
Kelly says workers needn’t fear losing their jobs to Attender.
“You really do need what they call a human in the loop,” Kelly said. “One of the most exciting parts of emerging technology isn’t AI, but it’s how humans interact with AI.
“I just don’t see, in the foreseeable future, the ability of AI to be able to manage the ever-changing, sophisticated jobs these vessels need when they’re in port,” he said.
At TIB Creative Studio, Barzyk also sees AI as lacking a certain sophistication. At least for now.
Shortly after AI exploded into the public consciousness late last year, she subscribed to OpenAI, as well as to newsletters focused on the rapidly evolving field.
“I try to be ahead of the curve when it comes to technology to help clients in a quicker way and come up with more ideas for them,” Barzyk said.
And while TIB Creative initially lost a few clients to the design software Canva and its AI image-generation tools, they soon came back, Barzyk says, reporting that the software didn’t live up to their expectations.
“AI definitely can’t [replace] creativity,” Barzyk said.