OpenAI Courts Brazil’s AI Potential, Urges Caution on Oversight
The maker of ChatGPT signals that rules being discussed in Congress could slow AI deployment.
Brazil is emerging as a leader in the development of artificial intelligence solutions but could miss out on opportunities if it creates regulations that are overly stringent, OpenAI’s head of Latin America and Caribbean policy said in an interview with Estadão.
The country is adopting artificial intelligence at a very fast rate and using it to address complex problems, Nicolas Andrade told the Brazilian online news site.
“One of the most incredible things … is that technology is being adopted in regions outside of Rio de Janeiro, São Paulo, Minas Gerais and Brasília,” said Andrade. Some of the fastest growth in ChatGPT use is coming from smaller and less affluent states like Tocantins, Amazonas, and Paraíba, he noted.
The company is launching programs that offer ChatGPT Plus for free, including FavelaGPT in Rio de Janeiro and São Paulo, and AmazonasGPT in Manaus, which gives computer science students at the Universidade Federal de Manaus the tool to develop solutions for Amazon rainforest conservation.
Ministers and presidents of developing countries frequently ask how they can build a group of software developers the size of the one OpenAI has, Andrade said, adding that Brazil already does. “It’s not because of the country’s size, nor the population. It’s a characteristic of Brazil.”
Regulatory Risk
Asked about legislation in Brazil’s Congress that would govern AI, Andrade said developers needed to be closely involved in the discussions to ensure the country can maximize opportunities.
The current risk-based proposal, which categorizes AI technologies according to the risks they pose, must ensure that the categories are clearly defined and keep pace with the technology, he said.
“When we think about artificial intelligence, two, three, four jumps from where it is today, there will be fewer and fewer people able to monitor this type of information, this type of technology, and it's important that the categories be as technical as possible,” he said.
In some cases, the legislation uses terms like “models” and “systems” interchangeably, the former comparable to an engine and the latter to a car, he said, adding that it lacks nuance in its understanding of AI value chains and the different players involved at each level.
Brazil’s AI Moment
My take: Brazil has consistently struck me as the most active country in Latin America and the Caribbean around AI and seems poised to have more influence on the global AI debate than any other in the region. OpenAI weighing in publicly on both the opportunities and the risks signals how much is at stake.
Legislators are trying to strike a balance between the EU’s AI governance model, which relies heavily on government regulation, and the US model, which is built on industry standards and minimal state involvement. Brazil’s approach seeks to balance growth and development with protections for workers and intellectual property.
Calling for greater involvement of software developers in the conversation is logical, but it also risks skewing the debate toward the technical implications of artificial intelligence rather than its labor market and social impacts.
OpenAI has been clear, at least this year, that it wants minimal regulatory involvement from US authorities to guarantee American dominance in the AI race. It’s hard to expect it would seek anything different in Brazil.
LatAm Headlines
AI’s Brazil Crime Spree
Police in São Paulo arrested five people for using AI to collect payment on fake Uber rides that were never actually completed, a scheme that involved both drivers and passengers, according to CNN Brasil. Drivers would accept short trips and then, while on the road, change the destination to another city or state, collecting the higher fare without ever completing the ride.
Authorities in Rio Grande do Sul arrested three people and issued warrants for three others accused of leading a scheme that used artificial intelligence to break into doctors’ bank accounts, CNN Brasil also reports. The accused recruited people who resembled the doctors to get through biometric verification, then used AI to create false documents that helped them hack into the doctors’ email accounts.
At the same time, São Paulo’s military police is creating an artificial intelligence lab for predictive analysis and pattern recognition that can support crime prevention, reports Exame.
AI Startup Seeks to Lower Travel Costs
Brazilian startup Voll has launched a new group of AI agents to help companies lower travel costs, targeting the Latin American market, reports Exame. Its first three agents book the cheapest flights, find the lowest-cost lodging, and audit corporate employees’ expense reports. Such tools have an obvious market in Brazil, where fees and surcharges are more widely used and tend to be higher than in other countries.
Macro Prompt
ChatGPT 5 Drops With a Thud
The much-awaited new release of ChatGPT fell short of expectations, with AI experts and superusers noting it tripped up on things that had been resolved in prior iterations without delivering any major breakthroughs, reports The Washington Post.
Some users complained that the technology felt less supportive than ChatGPT 4, which had been criticized for being sycophantic, encouraging patently bad ideas and sending users down sometimes dangerous rabbit holes.
The release fueled discussion of whether to push back predictions on the arrival of AGI, or artificial general intelligence, a concept of AI so advanced that it can displace large amounts of human activity and even innovate on its own.
I find the AGI concept as such rather utopian, as apparently does research firm Gartner, which this summer predicted AI is heading toward what it calls the “trough of disillusionment,” the stage when the hype collapses and AI is deployed only where it can offer clear and immediate benefits. (I’m thinking of spring 2000 and the internet bubble.)
AI In the College Era
AI is reinforcing the fact that the college experience is less and less about education, according to an interesting new piece in The Atlantic, which looks at how chatbots give students shortcuts so they can spend more time on the extracurriculars that will get them ahead in the job world.
“Students have internalized the message that they should be racking up more achievements and experience: putting in clinical hours, publishing research papers, and leading clubs,” it reads.
A different version of the same thing is happening for college professors, reinforcing the age-old reality that teaching is a side hustle to the main job of producing research, the story notes. Why not cut the time spent on lectures and student recommendations to get more research published? It’s worth a read.
When AI Starts Eating Its Own Garbage
What happens when the world runs out of human text to train AI models? A Substack called The New Unhinged, a self-described combination of opinion, reporting, research, and sarcasm, has an interesting take on this.
With growing numbers of articles, research papers, and websites now at least partially generated by LLMs, AI is increasingly recycling its own content.
Symptoms of this problem include chatbots that can’t shake hallucinations no matter how many times they are corrected, and increasingly frequent references to papers that don’t exist, cited by other papers that also don’t exist.
The post offers some always-warranted AI skepticism from a source I’d not come across before.