Tag: Artificial Intelligence – NBC Los Angeles

Gov. Newsom signs California laws to protect actors against unauthorized use of AI

California Gov. Gavin Newsom signed off Tuesday on legislation aimed at protecting Hollywood actors and performers against unauthorized artificial intelligence that could be used to create digital clones of themselves without their consent.

The new laws come as California legislators ramped up efforts this year to regulate the marquee industry that is increasingly affecting the daily lives of Americans but has had little to no oversight in the United States.

The laws also reflect the priorities of the Democratic governor who’s walking a tightrope between protecting the public and workers against potential AI risks and nurturing the rapidly evolving homegrown industry.

“We continue to wade through uncharted territory when it comes to how AI and digital media is transforming the entertainment industry, but our North Star has always been to protect workers,” Newsom said in a statement. “This legislation ensures the industry can continue thriving while strengthening protections for workers and how their likeness can or cannot be used.”

Inspired by the Hollywood actors’ strike last year over low wages and concerns that studios would use AI technology to replace workers, a new California law will allow performers to back out of existing contracts if vague language might allow studios to freely use AI to digitally clone their voices and likeness. The law is set to take effect in 2025 and has the support of the California Labor Federation and the Screen Actors Guild-American Federation of Television and Radio Artists, or SAG-AFTRA.

Another law signed by Newsom, also supported by SAG-AFTRA, prevents dead performers from being digitally cloned for commercial purposes without the permission of their estates. Supporters said the law is crucial to curb the practice, citing the case of a media company that produced a fake, AI-generated hourlong comedy special to recreate the late comedian George Carlin’s style and material without his estate’s consent.

“It is a momentous day for SAG-AFTRA members and everyone else because the AI protections we fought so hard for last year are now expanded upon by California law thanks to the legislature and Governor Gavin Newsom,” SAG-AFTRA President Fran Drescher said in a statement. “They say as California goes, so goes the nation!”

California is among the first states in the nation to establish performer protection against AI. Tennessee, long known as the birthplace of country music and the launchpad for musical legends, led the country by enacting a law protecting musicians and artists in March.

Supporters of the new laws said they will help encourage responsible AI use without stifling innovation. Opponents, including the California Chamber of Commerce, said the new laws are likely unenforceable and could lead to lengthy legal battles in the future.

The two new laws are among a slew of measures passed by lawmakers this year in an attempt to rein in the AI industry. Newsom signaled in July that he will sign a proposal to crack down on election deepfakes but has not weighed in on other legislation, including one that would establish first-in-the-nation safety measures for large AI models.

The governor has until Sept. 30 to sign the proposals, veto them or let them become law without his signature.

How Intel's AI platforms can help identify untapped athletic talent

The following content is created in partnership with Intel. It does not reflect the work or opinions of the NBC Los Angeles editorial staff.

What is athletic talent? Where can we find it? And how can we make sure we don’t miss it?

It’s long been clear that current scouting methods miss a huge amount of athletic potential. Consider football (that’s soccer, for Americans), the world’s most popular sport, with more than 300 million athletes of all ages and skill levels playing, but only 130,000 elite and professional footballers. Finding the best players—who could be in any town or village around the world—has, traditionally, been like finding a needle in a haystack. It’s very likely that some of the most talented athletes have never and will never find their way to elite competition at all.

But what if finding that talent was as easy as capturing video on a smartphone?

Intel’s AI platforms, access and the future of sport

This past March, the International Olympic Committee (IOC) and Intel representatives toured five villages in Senegal, where they measured the physical and cognitive abilities of a thousand children—recording video during a series of jumping, speed, and strength drills. A video analytics system powered by Intel® AI platforms was able to identify 40 promising young athletes, who the Senegalese National Olympic Committee hopes to help train in advance of the Youth Olympic Games in Dakar 2026.

The system analyzes sporting performance entirely from video, beginning with smartphone capture, and from there the entire stack is made possible by Intel® AI platforms. On the backend, custom computer vision models run on servers powered by Intel® Gaudi® accelerators, making training fast, scalable, accessible and affordable. The 3D motion capture video analysis system can analyze up to 1,000 biomechanics data points, thanks to optimizations with OpenVINO™ and the power of Intel® Xeon® Scalable processors.
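To make the notion of a biomechanics data point concrete, here is a minimal sketch of the kind of per-frame quantity a video-based motion analysis system might derive from estimated 3D pose keypoints. The keypoint layout, coordinates and knee-flexion metric are illustrative assumptions, not Intel’s actual pipeline.

```python
# Illustrative sketch only: compute one biomechanics metric (knee flexion)
# from 3D pose keypoints of the kind a pose-estimation model emits per frame.
# Keypoint names and coordinates are assumptions, not Intel's pipeline.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) formed by the segments b->a and b->c."""
    u, v = a - b, c - b
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# One frame of hypothetical 3D keypoints, in meters.
frame = {
    "hip":   np.array([0.00, 0.90, 0.0]),
    "knee":  np.array([0.05, 0.50, 0.0]),
    "ankle": np.array([0.02, 0.10, 0.1]),
}

knee_flexion = joint_angle(frame["hip"], frame["knee"], frame["ankle"])
print(f"Knee angle: {knee_flexion:.1f} degrees")  # ~162 degrees: a nearly straight leg
```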

The point ultimately is that talent is shared equally, and opportunity is often not. “Certain areas of the world may have the best talent; however, these individuals may go undiscovered due to a lack of resource or opportunities,” says Caroline Rhoades, Intel Olympic Games Marketing Manager. “This is the gap we are hoping to bridge with Intel-powered AI sports performance technology.”

The AI platform can identify, analyze and engage talent faster than ever, helping level the playing field on a scale that hasn’t been possible in the past. Since the system is accessible via a free smartphone app, anyone who has access to a smartphone can capture and upload video, and access performance metrics to help them improve. Beyond that, with clubs and programs beginning to use the technology as part of their scouting efforts, it potentially broadens the pool of athletes beyond what would be possible with older, travel-intensive methods of assessing talent.

The beauty of this technology is that it’s accessible to everyone. You do not need to be a professional athlete to leverage this technology to improve your athletic ability.

Caroline Rhoades, Intel

Marginal gains for all

During the Olympic Games Paris 2024, guests can visit the Intel AI Platform Experience, in collaboration with Samsung, at the Stade de France. The fan activation has five different training zones and lets fans experience a taste of what it means to train as an elite athlete while getting some insight into their own athletic performance and potential.

“Visitors will have an opportunity to do a series of activities and compare themselves to key athletes and understand where their potential is,” says Sarah Vickers, head of Intel’s Olympic and Paralympic Games Program. “And beyond that, it will really help give the average person an idea of where are they most athletically inclined.”

“Utilizing a series of AI-driven drills, this innovative technology constructs a personalized athletic profile for every participant, aligning them with their optimal Olympic event,” says Rhoades.

Leveling the playing field

Beyond the Olympic Games, Intel AI platforms can automate and increase access to opportunities for the next wave of sports stars and future Olympian hopefuls. But the benefits can extend to all athletes, from fitness enthusiasts to top professionals.

The AI platform can provide performance analysis based entirely on camera input, helping serious athletes realize their goals. The video analytics system, which can capture and track metrics continually during training and play, may not interfere with performance or distract athletes as much as sensors might.

It’s also possible to capture critical data that might otherwise be missed. Rhoades said, “A professional coach told me that if this system can use video to predict an ankle injury in my top player, I can start their physical therapy immediately, prevent them from missing games, and possibly win a championship instead of having my star player on a bench.”


Tom Hanks slams AI-generated wonder drug ads featuring his likeness

Originally appeared on E! Online

Tom Hanks has a cautionary message for fans.

The Oscar winner shared a public service announcement to Instagram on Aug. 29, alerting fans to some AI-generated ads that he said were “falsely using my name, likeness, and voice promoting miracle cures and wonder drugs.”

Noting that the ads were created “without my consent, fraudulently, and through AI,” the Type 2 diabetic shared that he solely works with “a board certified doctor” to treat the condition and cautioned others to avoid the unendorsed products.

“Don’t be fooled,” the 68-year-old wrote. “Don’t be swindled. Don’t lose your hard earned money.”

Back in 2013, the “Forrest Gump” star opened up about his Type 2 diabetes diagnosis.

“I went to the doctor and he said, ‘You know those high blood sugar numbers you’ve been dealing with since you were 36? Well, you’ve graduated,’” Hanks told David Letterman on “The Late Show.” “‘You’ve got Type 2 diabetes, young man.’”

Hanks — who shares son Colin Hanks, 45, and daughter Elizabeth Hanks, 42, with ex-wife Samantha Lewes and sons Chet Hanks, 34, and Truman Hanks, 28, with wife Rita Wilson — is no stranger to speaking out for medical causes.

In fact, in April, Hanks and Wilson stepped out as the honorary chairs of “An Unforgettable Evening” benefiting the Women’s Cancer Research Fund. The cause was a special one for the longtime couple, as Wilson underwent a double mastectomy for breast cancer in 2015.

“It takes a village,” Wilson told E! about supporting cancer research. “And this community in our town of Los Angeles, California has turned out for 25 years to support this cause. We don’t do it alone.”

Hanks credited his wife, whom he met on the set of the sitcom “Bosom Buddies” before marrying in 1988, with helping him organize his time so that he can devote his efforts to good causes.

“Periodically this lady sits me down,” he said, “and we pull out the books. We look at the year. We ponder the work that’s gotta get done.”


How Intel's AI platforms are making the Olympic and Paralympic Games more accessible

The following content is created in partnership with Intel. It does not reflect the work or opinions of the NBC Los Angeles editorial staff.

Some 15,000 athletes and as many as 15 million spectators are expected at the Olympic and Paralympic Games this summer. That means countless things to discover among dozens of venues—and countless ways to get lost.

Wayfinding is something many of us take for granted nowadays. With the widespread availability of wayfinding applications that provide real-time maps whether you’re navigating city traffic or blazing a trail through the mountains, it’s easy to forget that live maps don’t work indoors, since the relatively weak satellite signals are blocked by large structures. While that’s a problem for all users, it’s even more of a challenge for those who may be blind or vision impaired.

An indoor wayfinding solution, powered by Intel AI platforms, will be deployed at Paris 2024. But it’s only the beginning. The project has an ambitious goal of mapping all the world’s interior spaces to better serve anyone who’s ever needed to navigate an unfamiliar indoor space.

“We’re specifically looking at how Intel technology can help people with disabilities, but it’s a universal tool as well,” says Jocelyn Bourgault, Intel’s Paris 2024 Team USA and Accessibility Programs Lead. “Even people without disabilities can gain access to it and to its benefits.”

Wayfinding at the Olympic and Paralympic training sites

Two activations will be in place at the Olympic and Paralympic Games this summer, serving athletes and staff at the Team USA Training Site in Paris, and at the International Paralympic Committee headquarters in Bonn, Germany.

Within those venues, users will be able to experience rich indoor wayfinding via a smartphone app. With the ability to search locations, enter a specific destination, or explore predefined points of interest (POIs), sighted users can choose to follow a 3D camera view through the space with an overlay of arrows on a live map, while visually impaired users can choose a 2D map view, along with audio cues, or a dark view that relies entirely on audio.

Even more granular functionality can support wheelchair users who, for example, might want to choose a route that includes ramps and elevators rather than stairs, while someone on foot can select a shorter or faster path. And as with familiar mapping applications, users can set up a route that includes stops along the way, such as at a coffee kiosk on their way to a meeting.
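As a rough illustration of how preference-aware routing can work, the sketch below runs Dijkstra’s algorithm over a toy venue graph whose edges carry feature tags, so a wheelchair profile simply excludes any edge that requires stairs. The venue layout, tags and travel times are invented for illustration; they are not the deployed system’s data model.

```python
# Hypothetical sketch of accessibility-aware routing: edges carry feature tags,
# and a wheelchair profile bans edges that require stairs. All data is invented.
import heapq

# graph[node] = list of (neighbor, travel_seconds, features)
graph = {
    "entrance": [("atrium", 30, {"flat"}), ("mezzanine", 20, {"stairs"})],
    "atrium":   [("elevator", 15, {"flat"}), ("mezzanine", 25, {"ramp"})],
    "elevator": [("mezzanine", 40, {"elevator"})],
    "mezzanine": [("training_hall", 10, {"flat"})],
    "training_hall": [],
}

def shortest_path(start, goal, banned=frozenset()):
    """Dijkstra over the venue graph, skipping edges with banned features."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, secs, feats in graph[node]:
            if not feats & banned:
                heapq.heappush(queue, (cost + secs, nbr, path + [nbr]))
    return None

print(shortest_path("entrance", "training_hall"))                    # fastest on foot
print(shortest_path("entrance", "training_hall", banned={"stairs"})) # wheelchair profile
```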

“We’ve created paths that allow the athlete to find their way from the minute they get off the bus: let them find their coaches immediately, get into training if that’s where they’re headed or the hot tub for rehabilitation or the physical therapy room,” says Bourgault. “We’re hoping that the athletes are going to be able to use this app to really maximize their time and, you know, focus on getting that gold medal.”

Developed as part of Intel’s unified Environmental, Social and Governance efforts, the system is very different from indoor mapping systems that rely on Bluetooth beacons and has a far simpler setup. Rather than needing to place dozens of beacons in a space to provide reference points, the process requires only LiDAR scanning of a space. A walkthrough with a 360-degree LiDAR scanner is enough to capture the interior (depending on the size of the structure, that can take a team of surveyors a day).

The resulting point cloud, capturing the geometry of the physical space, is sent to the cloud, where machine learning algorithms trained on large datasets and running on Intel® Xeon® processors translate it into a digital twin of the space, identifying and categorizing objects and features along the way, faster and with much higher accuracy and precision than traditional methods. That data is sent back down to edge devices, where machine learning algorithms create the actual maps for users moving through the space. And it’s all powered by Intel® AI platforms, including the Intel® Distribution of OpenVINO™, an open-source toolkit for deploying AI on systems powered by Intel® Xeon® processors.
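For readers curious what deploying a model with OpenVINO™ on Intel® Xeon® processors looks like, here is a hedged sketch using OpenVINO’s public Python API: a model in OpenVINO’s IR format is compiled for a CPU target and run on a batch of data. The model file, task and input shape are placeholders, not the actual wayfinding models.

```python
# Hedged sketch of the deployment pattern described above, using OpenVINO's
# public Python API. "segmenter.xml" and the input shape are placeholders.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("segmenter.xml")      # hypothetical point-cloud classifier in IR format
compiled = core.compile_model(model, "CPU")   # compile for an Intel CPU target

points = np.random.rand(1, 4096, 3).astype(np.float32)  # dummy batch of 3D points
result = compiled([points])                   # synchronous inference call
labels = result[compiled.output(0)]           # first output tensor
print(labels.shape)
```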

“We’ve been working with our partners to showcase the performance of our CPUs using OpenVINO™ in their machine learning algorithms,” says Bourgault. “And so far, the test results are incredibly positive. OpenVINO™ plus Intel® Xeon® processors is a match made in heaven.”

Further scanning can add up-to-date navigation information as elements within the space change. For example, concession stands, bleachers, staging, and other temporary structures might be set up for one event and reconfigured for another. But those subsequent scans are much faster and less data intensive.

We’re specifically looking at how Intel technology can help people with disabilities, but it’s a universal tool as well. Even people without disabilities can gain access to it and to its benefits.

Jocelyn Bourgault, Intel

Finding the future of accessibility

The Olympic and Paralympic Games activations build on previous demonstrations of the technology at a university and an international airport.  And as more large-scale installations roll out, it’s clear that there are multitudes of possibilities unlocked by dependable indoor wayfinding—from transit to retail, healthcare, government, and education.

“It’s the curb-cut effect,” says Bourgault. “Think about how a curb cut helps a disabled person get from one sidewalk to another. But once they were adopted nationwide, people realized that they don’t just help people with disabilities, they help the mother with a stroller, the kid on a trike, the delivery driver with a cart of packages. And there are lots of examples where solutions that help people with a specific disability have wide-ranging benefits for the whole community.”

Future integrations will be deeper and broader. Collaborations with outdoor mapping providers will link indoor systems into the wider world of wayfinding, opening possibilities for use in transit systems, large campuses, and, of course, across all the competition and training venues of future Olympic and Paralympic Games. And they will require the sort of extremely fast, high-capacity processing that Intel® Xeon® processors provide.

“What we’d like to have,” says Bourgault, “is a handoff, or handshake, between this indoor navigation system and existing outdoor navigation systems. To really make the world accessible, you need collaborations to create that seamless interaction between the spaces.”

The incorporation of live data has even more potential. Sarah Vickers, head of Intel’s Olympic and Paralympic Games Program, envisions future applications that involve continual updates with live data, enabling actions such as adjusting suggested paths to account for crowded concessions, restroom lines or busy transit stops, or directing users to retail, informational, or cultural points of interest. There are countless opportunities for optimization and efficiency—and most importantly, to provide a better user experience. “There are so many different things you can do,” says Vickers. “If you can sync this with real-time data, it’s going to be really helpful. The more we can feed into that to help people, the better off it’ll be.”


Police officers are starting to use AI chatbots to write crime reports. Will they hold up in court?

A body camera captured every word and bark uttered as police Sgt. Matt Gilmore and his K-9 dog, Gunner, searched for a group of suspects for nearly an hour.

Normally, the Oklahoma City police sergeant would grab his laptop and spend another 30 to 45 minutes writing up a report about the search. But this time he had artificial intelligence write the first draft.

Pulling from all the sounds and radio chatter picked up by the microphone attached to Gilmore’s body camera, the AI tool churned out a report in eight seconds.

“It was a better report than I could have ever written, and it was 100% accurate. It flowed better,” Gilmore said. It even documented a fact he didn’t remember hearing — another officer’s mention of the color of the car the suspects ran from.

Oklahoma City’s police department is one of a handful to experiment with AI chatbots to produce the first drafts of incident reports. Police officers who’ve tried it are enthused about the time-saving technology, while some prosecutors, police watchdogs and legal scholars have concerns about how it could alter a fundamental document in the criminal justice system that plays a role in who gets prosecuted or imprisoned.

Built with the same technology as ChatGPT and sold by Axon, best known for developing the Taser and as the dominant U.S. supplier of body cameras, it could become what Gilmore describes as another “game changer” for police work.

“They become police officers because they want to do police work, and spending half their day doing data entry is just a tedious part of the job that they hate,” said Axon’s founder and CEO Rick Smith, describing the new AI product — called Draft One — as having the “most positive reaction” of any product the company has introduced.

“Now, there’s certainly concerns,” Smith added. In particular, he said district attorneys prosecuting a criminal case want to be sure that police officers — not solely an AI chatbot — are responsible for authoring their reports because they may have to testify in court about what they witnessed.

“They never want to get an officer on the stand who says, well, ‘The AI wrote that, I didn’t,’” Smith said.

AI technology is not new to police agencies, which have adopted algorithmic tools to read license plates, recognize suspects’ faces, detect gunshot sounds and predict where crimes might occur. Many of those applications have come with privacy and civil rights concerns and attempts by legislators to set safeguards. But the introduction of AI-generated police reports is so new that there are few, if any, guardrails guiding their use.

Concerns about society’s racial biases and prejudices getting built into AI technology are just part of what Oklahoma City community activist aurelius francisco finds “deeply troubling” about the new tool, which he learned about from The Associated Press.

“The fact that the technology is being used by the same company that provides Tasers to the department is alarming enough,” said francisco, a co-founder of the Foundation for Liberating Minds in Oklahoma City.

He said automating those reports will “ease the police’s ability to harass, surveil and inflict violence on community members. While making the cop’s job easier, it makes Black and brown people’s lives harder.”

Before trying out the tool in Oklahoma City, police officials showed it to local prosecutors who advised some caution before using it on high-stakes criminal cases. For now, it’s only used for minor incident reports that don’t lead to someone getting arrested.

“So no arrests, no felonies, no violent crimes,” said Oklahoma City police Capt. Jason Bussert, who handles information technology for the 1,170-officer department.

That’s not the case in another city, Lafayette, Indiana, where Police Chief Scott Galloway told the AP that all of his officers can use Draft One on any kind of case and it’s been “incredibly popular” since the pilot began earlier this year.

Or in Fort Collins, Colorado, where police Sgt. Robert Younger said officers are free to use it on any type of report, though they discovered it doesn’t work well on patrols of the city’s downtown bar district because of an “overwhelming amount of noise.”

Along with using AI to analyze and summarize the audio recording, Axon experimented with computer vision to summarize what’s “seen” in the video footage, before quickly realizing that the technology was not ready.

“Given all the sensitivities around policing, around race and other identities of people involved, that’s an area where I think we’re going to have to do some real work before we would introduce it,” said Smith, the Axon CEO, describing some of the tested responses as not “overtly racist” but insensitive in other ways.

Those experiments led Axon to focus squarely on audio in the product unveiled in April during its annual company conference for police officials.

The technology relies on the same generative AI model that powers ChatGPT, made by San Francisco-based OpenAI. OpenAI is a close business partner with Microsoft, which is Axon’s cloud computing provider.

“We use the same underlying technology as ChatGPT, but we have access to more knobs and dials than an actual ChatGPT user would have,” said Noah Spitzer-Williams, who manages Axon’s AI products. Turning down the “creativity dial” helps the model stick to facts so that it “doesn’t embellish or hallucinate in the same ways that you would find if you were just using ChatGPT on its own,” he said.
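The “creativity dial” Spitzer-Williams describes maps loosely onto sampling parameters such as temperature in general-purpose LLM APIs. Purely as an illustration, here is what a low-temperature summarization call looks like with OpenAI’s public Python client; Draft One’s actual interface and prompts are not public, and the transcript below is a placeholder.

```python
# Illustrative only: a low "creativity" (temperature) call with OpenAI's public
# Python client. Draft One's real interface and prompts are not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = "Dispatch: vehicle description... Officer: suspects fled on foot..."  # placeholder

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.0,  # low creativity: stick closely to the source material
    messages=[
        {"role": "system",
         "content": "Summarize the body-camera transcript as a factual draft "
                    "incident report. Do not add details absent from the transcript."},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```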

Axon won’t say how many police departments are using the technology. It’s not the only vendor, with startups like Policereports.ai and Truleo pitching similar products. But given Axon’s deep relationship with police departments that buy its Tasers and body cameras, experts and police officials expect AI-generated reports to become more ubiquitous in the coming months and years.

Before that happens, legal scholar Andrew Ferguson would like to see more of a public discussion about the benefits and potential harms. For one thing, the large language models behind AI chatbots are prone to making up false information, a problem known as hallucination that could add convincing and hard-to-notice falsehoods into a police report.

“I am concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing,” said Ferguson, a law professor at American University working on what’s expected to be the first law review article on the emerging technology.

Ferguson said a police report is important in determining whether an officer’s suspicion “justifies someone’s loss of liberty.” It’s sometimes the only testimony a judge sees, especially for misdemeanor crimes.

Human-generated police reports also have flaws, Ferguson said, but it’s an open question as to which is more reliable.

For some officers who’ve tried it, it is already changing how they respond to a reported crime. They’re narrating what’s happening so the camera better captures what they’d want to put in writing.

As the technology catches on, Bussert expects officers will become “more and more verbal” in describing what’s in front of them.

After Bussert loaded the video of a traffic stop into the system and pressed a button, the program produced a narrative-style report in conversational language that included dates and times, just like an officer would have typed from his notes, all based on audio from the body camera.

“It was literally seconds,” Gilmore said, “and it was done to the point where I was like, ‘I don’t have anything to change.’”

At the end of the report, the officer must click a box that indicates it was generated with the use of AI.

—————

O’Brien reported from Providence, Rhode Island

—————

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

Jenna Ortega says she deleted Twitter after seeing explicit AI images of herself as a minor

Can AI truly replicate the screams of a man on fire? Video game performers want their work protected

For hours, motion capture sensors tacked onto Noshir Dalal’s body tracked his movements as he unleashed aerial strikes, overhead blows and single-handed attacks that would later show up in a video game. He eventually swung the sledgehammer gripped in his hand so many times that he tore a tendon in his forearm. By the end of the day, he couldn’t pull the handle of his car door open.

The physical strain this type of motion work entails, and the hours put into it, are part of the reason why he believes all video-game performers should be protected equally from the use of unregulated artificial intelligence.

Video game performers say they fear AI could reduce or eliminate job opportunities because the technology could be used to replicate one performance into a number of other movements without their consent. That’s a concern that led the Screen Actors Guild-American Federation of Television and Radio Artists to go on strike in late July.

“If motion-capture actors, video-game actors in general, only make whatever money they make that day … that can be a really slippery slope,” said Dalal, who portrayed Bode Akuna in “Star Wars Jedi: Survivor.” “Instead of being like, ‘Hey, we’re going to bring you back’ … they’re just not going to bring me back at all and not tell me at all that they’re doing this. That’s why transparency and compensation are so important to us in AI protections.”

Hollywood’s video game performers announced a work stoppage — their second in a decade — after more than 18 months of negotiations over a new interactive media agreement with game industry giants broke down over artificial intelligence protections. Members of the union have said they are not anti-AI. The performers are worried, however, the technology could provide studios with a means to displace them.

Dalal said he took it personally when he heard that the video game companies negotiating with SAG-AFTRA over a new contract wanted to consider some movement work “data” and not performance.

If gamers were to tally up the cut scenes they watch in a game and compare them with the hours they spend controlling characters and interacting with non-player characters, they would see that they interact with “movers’” and stunt performers’ work “way more than you interact with my work,” Dalal said.

“They are the ones selling the world these games live in, when you’re doing combos and pulling off crazy, super cool moves using Force powers, or you’re playing Master Chief, or you’re Spider-Man swinging through the city,” he said.

Some actors argue that AI could strip less-experienced actors of the chance to land smaller background roles, such as non-player characters, where they typically cut their teeth before landing larger jobs. The unchecked use of AI, performers say, could also lead to ethical issues if their voices or likenesses are used to create content that they do not morally agree with. That type of ethical dilemma has recently surfaced with game “mods,” in which fans alter and create new game content. Last year, voice actors spoke out against such mods in the role-playing game “Skyrim,” which used AI to generate actors’ performances and cloned their voices for pornographic content.

In video game motion capture, actors wear special Lycra or neoprene suits with markers on them. In addition to more involved interactions, actors perform basic movements like walking, running or holding an object. Animators grab from those motion capture recordings and chain them together to respond to what someone playing the game is doing.

“What AI is allowing game developers to do, or game studios to do, is generate a lot of those animations automatically from past recordings,” said Brian Smith, an assistant professor at Columbia University’s Department of Computer Science. “No longer do studios need to gather new recordings for every single game and every type of animation that they would like to create. They can also draw on their archive of past animation.”

If a studio has motion capture banked from a previous game and wants to create a new character, he said, animators could use those stored recordings as training data.

“With generative AI, you can generate new data based on that pattern of prior data,” he said.
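A crude way to see how new animation can be drawn from an archive of past recordings is motion matching, a far simpler relative of the generative approach Smith describes: at each step, find the archived pose closest to the current one and continue with the frame that followed it. The toy sketch below uses random stand-in data rather than real skeletal capture.

```python
# Toy sketch of drawing new animation from an archive of banked recordings
# (motion matching, a much simpler relative of the generative approach above).
# The 8-float "pose" is a stand-in for full skeletal data.
import numpy as np

rng = np.random.default_rng(0)
archive = rng.normal(size=(500, 8))  # 500 banked mocap frames (placeholder data)

def next_frame(current: np.ndarray) -> np.ndarray:
    """Find the closest archived pose and return the frame that followed it."""
    dists = np.linalg.norm(archive[:-1] - current, axis=1)
    return archive[np.argmin(dists) + 1]

pose = archive[0]
generated = [pose]
for _ in range(60):               # synthesize ~1 second of motion at 60 fps
    pose = next_frame(pose)
    generated.append(pose)
print(np.stack(generated).shape)  # (61, 8)
```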

A spokesperson for the video game producers, Audrey Cooling, said the studios offered “meaningful” AI protections, but SAG-AFTRA’s negotiating committee said that the studios’ definition of who constitutes a “performer” is key to understanding the issue of who would be protected.

“We have worked hard to deliver proposals with reasonable terms that protect the rights of performers while ensuring we can continue to use the most advanced technology to create a great gaming experience for fans,” Cooling said. “We have proposed terms that provide consent and fair compensation for anyone employed under the (contract) if an AI reproduction or digital replica of their performance is used in games.”

The game companies offered wage increases, she said, with an initial 7% increase in scale rates and an additional 7.64% increase effective in November. That’s an increase of 14.5% over the life of the contract. The studios had also agreed to increases in per diems, payment for overnight travel and a boost in overtime rates and bonus payments, she added.

“Our goal is to reach an agreement with the union that will end this strike,” Cooling said.

A 2023 report on the global games market from industry tracker Newzoo predicted that video games would begin to include more AI-generated voices, similar to the voice acting in “High on Life” from Squanch Games. Game developers, the Amsterdam-based firm said, will use AI to produce unique voices, bypassing the need to source voice actors.

“Voice actors may see fewer opportunities in the future, especially as game developers use AI to cut development costs and time,” the report said, noting that “big AAA prestige games like ‘The Last of Us’ and ‘God of War’ use motion capture and voice acting similarly to Hollywood.”

Other games, such as “Cyberpunk 2077,” cast celebrities.

Actor Ben Prendergast said that data points collected for motion capture don’t pick up the “essence” of someone’s performance as an actor. The same is true, he said, of AI-generated voices that can’t deliver the nuanced choices that go into big scenes — or smaller, strenuous efforts like screaming for 20 seconds to portray a character’s death by fire.

“The big issue is that someone, somewhere has this massive data, and I now have no control over it,” said Prendergast, who voices Fuse in the game “Apex Legends.” “Nefarious or otherwise, someone can pick up that data now and go, we need a character that’s nine feet tall, that sounds like Ben Prendergast and can fight this battle scene. And I have no idea that that’s going on until the game comes out.”

Studios would be able to “get away with that,” he said, unless SAG-AFTRA can secure the AI protections they are fighting for.

“It reminds me a lot of sampling in the ‘80s and ’90s and 2000s where there were a lot of people getting around sampling classic songs,” he said. “This is an art. If you don’t protect rights over their likeness, or their voice or body and walk now, then you can’t really protect humans from other endeavors.”

Wyoming reporter caught using artificial intelligence to create fake quotes and stories

Quotes from Wyoming’s governor and a local prosecutor were the first things that seemed slightly off to Powell Tribune reporter CJ Baker. Then, it was some of the phrases in the stories that struck him as nearly robotic.

The dead giveaway, though, that a reporter from a competing news outlet was using generative artificial intelligence to help write his stories came in a June 26 article about the comedian Larry the Cable Guy being chosen as the grand marshal of the Cody Stampede Parade.

“The 2024 Cody Stampede Parade promises to be an unforgettable celebration of American independence, led by one of comedy’s most beloved figures,” the Cody Enterprise reported. “This structure ensures that the most critical information is presented first, making it easier for readers to grasp the main points quickly.”

After doing some digging, Baker, who has been a reporter for more than 15 years, met with Aaron Pelczar, a 40-year-old who was new to journalism and who Baker says admitted that he had used AI in his stories before he resigned from the Enterprise.

The publisher and editor at the Enterprise, which was co-founded in 1899 by Buffalo Bill Cody, have since apologized and vowed to take steps to ensure it never happens again. In an editorial published Monday, Enterprise Editor Chris Bacon said he “failed to catch” the AI copy and false quotes.

“It matters not that the false quotes were the apparent error of a hurried rookie reporter that trusted AI. It was my job,” Bacon wrote. He apologized that “AI was allowed to put words that were never spoken into stories.”

Journalists have derailed their careers by making up quotes or facts in stories long before AI came about. But this latest scandal illustrates the potential pitfalls and dangers that AI poses to many industries, including journalism, as chatbots can spit out spurious if somewhat plausible articles with only a few prompts.

AI has found a role in journalism, including in the automation of certain tasks. Some newsrooms, including The Associated Press, use AI to free up reporters for more impactful work, but most AP staff are not allowed to use generative AI to create publishable content.

The AP has been using technology to assist in articles about financial earnings reports since 2014, and more recently for some sports stories. It is also experimenting with an AI tool to translate some stories from English to Spanish. At the end of each such story is a note that explains technology’s role in its production.

Being upfront about how and when AI is used has proven important. Sports Illustrated was criticized last year for publishing AI-generated online product reviews that were presented as having been written by reporters who didn’t actually exist. After the story broke, SI said it was firing the company that produced the articles for its website, but the incident damaged the once-powerful publication’s reputation.

In his Powell Tribune story breaking the news about Pelczar’s use of AI in articles, Baker wrote that he had an uncomfortable but cordial meeting with Pelczar and Bacon. During the meeting, Pelczar said, “Obviously I’ve never intentionally tried to misquote anybody” and promised to “correct them and issue apologies and say they are misstatements,” Baker wrote, noting that Pelczar insisted his mistakes shouldn’t reflect on his Cody Enterprise editors.

After the meeting, the Enterprise launched a full review of all of the stories Pelczar had written for the paper in the two months he had worked there. They have discovered seven stories that included AI-generated quotes from six people, Bacon said Tuesday. He is still reviewing other stories.

“They’re very believable quotes,” Bacon said, noting that the people he spoke to during his review of Pelczar’s articles said the quotes sounded like something they’d say, but that they never actually talked to Pelczar.

Baker reported that seven people told him that they had been quoted in stories written by Pelczar, but had not spoken to him.

Pelczar did not respond to an AP phone message, left at a number listed as his, asking to discuss what happened. Bacon said Pelczar declined to discuss the matter with another Wyoming newspaper that had reached out.

Baker, who regularly reads the Enterprise because it’s a competitor, told the AP that a combination of phrases and quotes in Pelczar’s stories aroused his suspicions.

Pelczar’s story about a shooting in Yellowstone National Park included the sentence: “This incident serves as a stark reminder of the unpredictable nature of human behavior, even in the most serene settings.”

Baker said the line sounded like the summaries of his stories that a certain chatbot seems to generate, in that it tacks on some kind of a “life lesson” at the end.

Another story — about a poaching sentencing — included quotes from a wildlife official and a prosecutor that sounded like they came from a news release, Baker said. However, there wasn’t a news release and the agencies involved didn’t know where the quotes had come from, he said.

Two of the questioned stories included fake quotes from Wyoming Gov. Mark Gordon that his staff only learned about when Baker called them.

“In one case, (Pelczar) wrote a story about a new OSHA rule that included a quote from the Governor that was entirely fabricated,” Michael Pearlman, a spokesperson for the governor, said in an email. “In a second case, he appeared to fabricate a portion of a quote, and then combined it with a portion of a quote that was included in a news release announcing the new director of our Wyoming Game and Fish Department.”

The most obvious AI-generated copy appeared in the story about Larry the Cable Guy that ended with the explanation of the inverted pyramid, the basic approach to writing a breaking news story.

It’s not difficult to create AI stories. Users could put a criminal affidavit into an AI program and ask it to write an article about the case including quotes from local officials, said Alex Mahadevan, director of a digital media literacy project at the Poynter Institute, the preeminent journalism think tank.

“These generative AI chatbots are programmed to give you an answer, no matter whether that answer is complete garbage or not,” Mahadevan said.

Megan Barton, the Cody Enterprise’s publisher, wrote an editorial calling AI “the new, advanced form of plagiarism and in the field of media and writing, plagiarism is something every media outlet has had to correct at some point or another. It’s the ugly part of the job. But, a company willing to right (or quite literally write) these wrongs is a reputable one.”

Barton wrote that the newspaper has learned its lesson, has a system in place to recognize AI-generated stories and will “have longer conversations about how AI-generated stories are not acceptable.”

The Enterprise didn’t have an AI policy, in part because it seemed obvious that journalists shouldn’t use it to write stories, Bacon said. Poynter has a template from which news outlets can build their own AI policy.

Bacon plans to have one in place by the end of the week.

“This will be a pre-employment topic of discussion,” he said.

Older Americans prepare themselves for a world altered by artificial intelligence

Hollywood icons of the past take new star turn, with celebrity estates cashing in on AI voice cloning deals
  • Thirty minutes of voice from a Hollywood movie star is enough to create a “professional voice clone.”
  • ElevenLabs, backed by prominent Silicon Valley venture capital firms, has penned multiple deals with the estates of legendary actors including Burt Reynolds, Judy Garland, James Dean and Sir Laurence Olivier for a reading app.
  • Stars from Hollywood’s golden age are being reborn through celebrity estate AI voice cloning deals, a sign of how some of the “Wild West” concerns about unauthorized AI impersonation are being addressed by new business models.

ElevenLabs, an audio technology startup funded by venture capital firms including Andreessen Horowitz and Sequoia, has penned multiple deals with the estates of legendary actors for its IconicVoices tool, which allows users to have AI-generated voices read to them via an audiobook app. The stars include Burt Reynolds, Judy Garland, James Dean and Sir Laurence Olivier.

ElevenLabs, which launched in 2023, creates audio for books and news articles, video game characters, film pre-production, and social media and advertising. The company already works with publishers including the New York Times and Washington Post, and earlier this year it was selected by Disney to join its accelerator program.

“You need around 30 minutes of high-quality audio to create a professional voice clone,” said Sam Sklar, a member of ElevenLabs’ growth team, and the voices are generated from the celebrity’s catalog. Once created, a voice can be called upon to read text (articles, PDFs, ePubs, newsletters, or other text content). However, neither the voice nor the content can be exported; all listening happens inside the reading app.

    A user could, for instance, have articles narrated to them by James Dean within the app, but users cannot access the voices for any content not already in the app. 
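For context on what programmatic voice generation looks like outside the reading app, here is a hedged sketch of a text-to-speech request against ElevenLabs’ public REST API, following its published documentation; the voice ID and model ID are placeholders, and the IconicVoices celebrity voices described above are confined to the app and not reachable this way.

```python
# Hedged sketch of a text-to-speech call against ElevenLabs' public REST API.
# Endpoint shape follows the company's published docs as of this writing;
# VOICE_ID and model_id are placeholders. The IconicVoices celebrity voices
# are confined to the reading app and are NOT available via this API.
import os
import requests

VOICE_ID = "your-voice-id"  # placeholder: a voice you own or have licensed
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

resp = requests.post(
    url,
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": "Thirty minutes of clean audio is enough to clone a voice.",
          "model_id": "eleven_multilingual_v2"},
    timeout=60,
)
resp.raise_for_status()
with open("narration.mp3", "wb") as f:
    f.write(resp.content)  # the API returns raw audio bytes
```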

    These kinds of deals could help set the boundaries for a future in which AI-generated voice content is less contentious and more of a controlled, curated terrain. Google Play and Apple Books utilize AI-generated voices to some extent already, though there are high hurdles to recreating human voice pacing, intonation and emotion.

    The AI industry has been plagued by concerns about use of celebrity voices, with OpenAI doing an about-face in May after actress Scarlett Johansson accused the company of ripping off her voice after she rejected offers to license it.

“We’re very alive to the risks associated with synthetic media and take the safe use of our tools incredibly seriously,” Sklar said. Safeguards include active moderation of content, accountability enforceable with bans, and special provisions to limit the impact of AI voices on the 2024 election.

    Among the current generation of actors, there remains significant anxiety surrounding the use of AI in generating voice content. Voice actors for video games have raised concerns, and last year’s film and television strike had significant roots in anxieties over the use of AI. The use of iconic voices sold by estates is a market niche that potentially avoids these pitfalls, representing a new income stream from AI rather than a lost income stream because of AI.

The use of soundalike celebrity voices is an issue that predates AI, such as the 1988 case of Frito-Lay using a Tom Waits soundalike in its ads, and another Waits case in 2007, after Waits himself had long refused advertising deals. AI presents an easier path to creating soundalikes, and recent lawsuits against AI startup Lovo for allegedly inappropriate and uncompensated use of voice actors in generating its AI voices are a reminder that the world of AI voice generation is likely, to some degree, to remain a complicated, litigious one. (Lovo has denied the claims in the suit and also pointed to a revenue-sharing model it offers actors for cloned voices.)

It’s difficult to assess the protections in place without reviewing the specific language of the IconicVoices contracts, said Steve Cohen, a partner at Pollock & Cohen who is representing voice actors in an unrelated lawsuit alleging cloning of voices without permission.

ElevenLabs points to the way its IconicVoices tool obtains permissions and curates usage of the voices.

    “Giving permission for using one’s voice is one of the basics,” Cohen said. “I think the key factors are permission, compensation, and control.”

    New, clearer laws may also be a disincentive to people tempted to improperly appropriate a voice, “not for hardcore bad guys, but for edge cases,” Cohen said. But quoting Bette Davis in “All About Eve,” he added, “‘Buckle your seatbelts; it’s going to be a bumpy ride.'” 

    How realistic cloned voices sound is also an evolving issue. Many experts say that because AI doesn’t “know” what it’s saying, performance quality is limited. Sklar said ElevenLabs’ latest level of speech quality is indistinguishable from real human speech. “The text-to-speech tools from ElevenLabs can understand the context of the words,” he said.

    AI is only as good as the models on which it is trained, and the actors’ voice datasets become part of the process.

    “Neural models derive their capabilities from mimicking/memorizing nuances and patterns present in their training data,” said Nauman Dawalatabad, a postdoctoral associate at the MIT Computer Science and Artificial Intelligence Laboratory with extensive research in AI voice generation. “The quality and diversity of training data significantly influence the model’s performance.” 

    The vocal delivery of movie stars could add to the AI mimicry and learning by providing the kind of “high-quality voice datasets for training and fine-tuning large models” that Dawalatabad said is essential to the process. But he expressed reservations about “sounding human” as being the right test for the AI voice field, as that could reinforce an antagonistic relationship between human and synthetic voicings.

    Voice actors remain divided on the technology, with some refusing to consider any deals but others saying opportunities to clone their voices for speedier, cheaper production on some forms of audiobooks can’t be ignored. “AI technology can help workflows. AI is not a new tool for voice talent, producers, and publishers, many of whom use it to improve their quality control in post-production,” Michele Cobb, executive director of the Audio Publishers Association, told CNBC last year.

    Recent generative models have shown substantial advancements compared to earlier iterations, making it increasingly difficult to distinguish between fake and authentic voices by ear alone, according to Dawalatabad. AI voice licensing could alleviate workload for voice actors, he added, without supplanting them, as they “intercede in the process by focusing on offering correction or enhancement to ineffable aspects such as intonation, warmth, and emphasis, which still present challenges.”  

    ]]>
    Sun, Aug 11 2024 10:57:06 AM Sun, Aug 11 2024 01:06:12 PM
    How Intel's AI platforms are helping revolutionize the Olympic Games broadcast experience https://www.nbclosangeles.com/news/local/how-intels-ai-platforms-are-helping-revolutionize-the-olympic-games-broadcast-experience/3460997/ 3460997 post 9697078 https://media.nbclosangeles.com/2024/07/Intel_Article02_AdobeStock_832623627_CROP.jpg?quality=85&strip=all&fit=300,169

    The following content is created in partnership with Intel. It does not reflect the work or opinions of the NBC Los Angeles editorial staff. Click here to learn more about Intel.

    For nearly a century, the modern Olympic Games have showcased technological innovations alongside human achievement, and broadcast advances have played a big part in that. The Olympic Games Berlin 1936 was the first televised sporting event (“broadcast” over closed-circuit to remote venues); the Olympic Winter Games in 1960 introduced the “instant replay;” and Tokyo 1964—the “TV Olympics”—were the first to be broadcast internationally by satellite (via Syncom 3, the first geostationary communications satellite).

    The Olympic Games Paris 2024 build on that legacy with the debut of the first-ever end-to-end 8K livestreaming experience using the VVC (Versatile Video Coding) standard, delivered to selected locations spanning four continents, and an automatic highlights generation system that will allow Olympic Broadcasting Services to deliver an unprecedented amount of custom content worldwide—both powered by Intel® AI platforms.

    Automatic highlights generation: custom content everywhere

    If you’re an Olympic or Paralympic Games fan who lives for a sport like badminton, table tennis or cycling, you know the feeling of staying up until the wee hours of the morning to catch a final match, or you’ve made do with a few seconds of highlights boiled down from the hours of grueling competition you’ve been looking forward to for four years.

    The challenge comes down to scale. The Olympic Games is the biggest sporting event in the world, and the sheer amount of footage is truly overwhelming. OBS plans to capture more than 11,000 hours of content at Paris 2024—that’s the equivalent of 458 full days, produced over just 17 days of competition. That’s far more footage than can be broadcast.

    “A lot of times you have to prioritize what’s going to get the most eyeballs, and that really undermines some sports from getting the coverage that they deserve,” says Courtney Willock, Head of Broadcast Technology, Intel Olympic & Paralympic Games Office. “Our platform helps ensure that we are giving the right content to the right audience, depending on the interests of each market.”

    That challenge changes starting with the Paris Games, where Intel’s AI platform for broadcast production and editing will enable automatic highlights generation that can deliver custom content focused on the sports and athletes that fans in each region care about. The technology can also free human editors from the time-consuming tasks of logging, enabling them to tell the human stories that drive lifelong engagement with the Olympic Games. Using the platform, OBS can deliver localized content to rights holders—and audiences—in more than 200 countries and territories.

    “The platform takes three sources of information: video, audio and data,” says Willock, “and then using AI, identifies events and the actions of individual competitors to make a decision on what is the most relevant based on criteria that are set—and the criteria that are allowed in the platform are extensive.”

    That means unprecedented flexibility, with producers able to fine-tune the system on the fly, even without specialized training. “We’re automating things that otherwise wouldn’t have coverage, wouldn’t have highlights,” Willock tells us. “Now you can say I want to be able to serve all these different appetites; you just set criteria for all of those ahead of time with a few clicks.”
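    To make Willock’s description concrete, here is a toy sketch in Python of what criteria-driven selection could look like. The event fields, weights and market profile are all invented for illustration; this is not Intel’s actual platform or API.

        from dataclasses import dataclass

        @dataclass
        class Event:
            sport: str
            athlete_country: str
            excitement: float   # e.g., a crowd-noise score fused from the audio feed, 0-1

        @dataclass
        class MarketProfile:
            preferred_sports: set
            home_country: str
            min_score: float = 0.5

        def score(event, profile):
            # Higher scores mean the clip is more likely to make this market's reel.
            s = event.excitement
            if event.sport in profile.preferred_sports:
                s += 0.3
            if event.athlete_country == profile.home_country:
                s += 0.3
            return s

        events = [Event("badminton", "DEN", 0.8),
                  Event("athletics", "USA", 0.9),
                  Event("table tennis", "DEN", 0.6)]
        denmark = MarketProfile({"badminton", "table tennis"}, "DEN")
        reel = sorted((e for e in events if score(e, denmark) >= denmark.min_score),
                      key=lambda e: score(e, denmark), reverse=True)
        print([e.sport for e in reel])  # ['badminton', 'table tennis', 'athletics']

    The same event list, scored against a different market profile, would produce a different reel, which is the point of the approach.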

    We are using the Olympics as a gateway to solve some of the most complex challenges in the technical world. Intel is one of the only companies on the planet that can do it end-to-end because we have data center to client to edge and everything in between to define the future of the broadcast industry.

    Ravindra Velhal, Intel

    8K video: detail that beats the naked eye

    Ever lose a contact lens? Ever lose a contact lens in the heat of elite judo competition during the Olympic Games?

    “In Tokyo, when the first judo match started,” says Intel Global Content Strategist Ravindra Velhal, “one of the judoka dropped her contact lens on the tatami mat during the competition. And not only did the 8K camera capture that contact lens, but we could actually see markings on that contact lens.”

    The episode underscores the ultimate promise of 8K video: staggeringly high resolution. That means more lifelike reproduction with unprecedented realism and detail—enough detail to spot a lost contact lens from across the world.

    All that resolution means a ton of data—an 8K image is 7,680 pixels wide by 4,320 pixels tall: that’s four times the resolution of a 4K image, which in turn has four times the resolution of a standard HD frame. It adds up to 33 million pixels, and at 60 frames per second with HDR (High Dynamic Range) 10 bit and multi-channel immersive audio, that’s up to 48 Gbps RAW—a serious challenge for livestreaming.

    The raw signal from OBS’s 8K cameras goes up to the Intel Broadcast Server at IBC, where it’s encoded on servers powered by Intel® Xeon® processors, using the Intel® AMX AI accelerator and Deep Learning Boost (Intel® DL Boost) technology. The workflow is able to compress that incoming 48 Gbps 8K stream on the fly (the process takes 200-400 milliseconds) to a 40-60 megabit per second stream for distribution using the latest H.266/VVC (Versatile Video Coding) standard.
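    For readers who want to check the arithmetic, a few lines of Python reproduce the numbers above. The exact RAW pixel format isn’t specified, so the bits-per-pixel value here is an assumption chosen to match the stated 48 Gbps figure.

        width, height, fps = 7680, 4320, 60
        pixels = width * height                    # 33,177,600 -- the "33 million pixels"
        bits_per_pixel = 24                        # assumption: 12-bit 4:2:2; RAW formats vary

        raw_bps = pixels * fps * bits_per_pixel
        print(f"raw: {raw_bps / 1e9:.1f} Gbps")    # ~47.8 Gbps, in line with "up to 48 Gbps"

        delivery_bps = 50e6                        # midpoint of the 40-60 Mbps VVC stream
        print(f"compression: {raw_bps / delivery_bps:,.0f}:1")  # roughly 950:1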

    “We get the signal at 48 Gigabit per second RAW from OBS,” says Velhal, “and from there, everything downstream we basically handle. Technically, we can take any 8K live sports and events feed directly to our server-based workflow and deliver globally on our own.”

    The world is still at the beginning of 8K adoption, and while outside of the Japanese market there aren’t many 8K OTT devices yet, viewers with select Intel-powered machines—desktops powered by a 14th Gen Intel® Core™ i9 with an Intel® Arc GPU, or laptops based on Intel® Core™ Ultra processors, connected to an 8K TV—are in luck. Those machines are capable of decoding 8K content, enabling viewers to experience the full, rich detail of the broadcasts of the future right now.

    Looking to the future: beyond the biggest stage

    The Olympic Games—the world’s biggest athletic stage—offers a jumping-off point for future innovations. These advances pave the way for a better, richer, more immersive, more personalized experience for sports fans everywhere—and for the future of television.

    “We are using the Olympics as a gateway to solve some of the most complex challenges in the technical world,” says Velhal. “Intel is one of the only companies on the planet that can do it end-to-end because we have data center to client to edge and everything in between to define the future of the broadcast industry.”

    “The automatic highlights generation platform is already well-established around the world, but this is the first time it’s being applied at the Olympic Games,” says Willock. “It’s a great testimonial for how it manages diversity of competition, diversity of events, and simultaneous content.”

    “There’s a real opportunity,” says Sarah Vickers, leader of Intel’s Olympic and Paralympic Games Program, “to show the breadth of our technology capabilities and then take those examples and show our customers and partners how we can scale. The opportunities are extensive for getting much more customized content to that fan at home. And if you think about some countries that don’t have big broadcast budgets, they’ll be able to get customized AI feeds that let them tell that country’s story for the day. And that’s content that might not have been produced before.”


    More from this series:

    ]]>
    Tue, Aug 06 2024 07:00:00 AM Wed, Aug 07 2024 01:14:12 PM
    OpenAI co-founder John Schulman says he will leave and join rival Anthropic https://www.nbclosangeles.com/news/business/money-report/openai-co-founder-john-schulman-says-he-will-leave-and-join-rival-anthropic/3480022/ 3480022 post 9772540 Gabby Jones | Bloomberg | Getty Images https://media.nbclosangeles.com/2024/08/108016585-1722909580770-gettyimages-1247992354-OPENAI_CHATGPT.jpeg?quality=85&strip=all&fit=300,176
  • John Schulman worked to refine models that go into OpenAI’s ChatGPT chatbot.
  • After two safety leaders left, the startup said Schulman would join a safety and security committee.
  • Schulman said OpenAI executives have been committed to the area.
    OpenAI co-founder John Schulman said in a Monday X post that he would leave the Microsoft-backed company and join Anthropic, an artificial intelligence startup with funding from Amazon.

    The move comes less than three months after OpenAI disbanded a superalignment team that focused on trying to ensure that people can control AI systems that exceed human capability at many tasks.

    Schulman had been a co-leader of OpenAI’s post-training team that refined AI models for the ChatGPT chatbot and a programming interface for third-party developers, according to a biography on his website. In June, OpenAI said Schulman, as head of alignment science, would join a safety and security committee that would provide advice to the board. Schulman has only worked at OpenAI since receiving a Ph.D. in computer science in 2016 from the University of California, Berkeley.

    “This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work,” Schulman wrote in the social media post.

    He said he wasn’t leaving because of a lack of support for new work on the topic at OpenAI.

    “On the contrary, company leaders have been very committed to investing in this area,” he said.

    The leaders of the superalignment team, Jan Leike and company co-founder Ilya Sutskever, both left this year. Leike joined Anthropic, while Sutskever said he was helping to start a new company, Safe Superintelligence Inc.

    Since OpenAI staff members established Anthropic in 2021, the two young San Francisco-based businesses have been battling to build the most performant generative AI models for producing human-like text. Amazon, Google and Meta have also developed large language models.

    “Very excited to be working together again!” Leike wrote in reply to Schulman’s message.

    Sam Altman, OpenAI’s co-founder and CEO, said in a post of his own that Schulman’s perspective informed the startup’s early strategy.

    Sutskever and Leike’s departures came months after the board pushed out Altman as chief last November. Employees protested that decision, and Sutskever and two other board members, Tasha McCauley and Helen Toner, subsequently left the board. Altman was reinstated and OpenAI took on additional board members.

    Toner said on a podcast that Altman had given the board incorrect information about the “small number of formal safety processes that the company did have in place.”

    An independent review by the law firm WilmerHale found that the board’s decision to push out Altman did not stem from concerns about product safety.

    Last week, Altman said on X that OpenAI “has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations.” Altman said OpenAI is still committed to keeping 20% of its computing resources for safety initiatives.

    Also on Monday, Greg Brockman, another co-founder of OpenAI and its president, announced that he was taking a sabbatical for the rest of the year.

    WATCH: OpenAI announces a search engine called SearchGPT

    ]]>
    Mon, Aug 05 2024 07:33:14 PM Mon, Aug 05 2024 09:00:11 PM
    Google pulls AI ad for Olympics following backlash https://www.nbclosangeles.com/news/business/money-report/google-pulls-ai-ad-for-olympics-following-backlash/3477046/ 3477046 post 9759641 https://media.nbclosangeles.com/2024/08/108015767-1722620497712-Screenshot_2024-08-02_at_104117_AM-1.jpg?quality=85&strip=all&fit=300,176
  • Google has pulled an ad for its AI chatbot Gemini following backlash online.
  • The ad depicted a little girl and her dad using the tool to write a fan letter to an Olympic athlete.
  • The company told CNBC that it phased out the commercial due to feedback.
    Google has pulled an Olympics ad for its chatbot Gemini from the airwaves following backlash over the way it depicts a little girl using artificial intelligence to write a fan letter.

    The ad, titled “Dear Sydney,” showed a girl’s dad prompting the AI chatbot to help write a letter to her favorite athlete, U.S. hurdler and sprinter Sydney McLaughlin-Levrone. Google launched Gemini, formerly known as Bard, last year following the surge in popularity of OpenAI’s ChatGPT.

    “Gemini, help my daughter write a letter telling Sydney how inspiring she is,” the father said in the ad, prompting Gemini. The commercial then briefly shows the draft Gemini produced and closes with footage of the little girl running on the track with a text overlay that says, “A little help from Gemini.”

    The ad is still viewable on YouTube but has been taken off the airwaves, where it was repeatedly shown in the first week of the Games.

    A Google spokesperson said in a statement to CNBC that, “While the ad tested well before airing, given the feedback, we have decided to phase the ad out of our Olympics rotation.”

    Google said it still sees the Gemini app as helping to provide a “starting point” for writing ideas.

    “We believe that AI can be a great tool for enhancing human creativity, but can never replace it,” the statement said. “Our goal was to create an authentic story celebrating Team USA.”

    Google previously defended the ad. However, backlash continued to gain steam as people accused the company of encouraging the use of automation instead of authenticity, particularly with children.

    “I flatly reject the future that Google is advertising,” Shelly Palmer, professor of advanced media at Syracuse University’s S.I. Newhouse School of Public Communications, wrote in a widely circulated blog post. The technology presents a “monocultural future where we see fewer and fewer examples of original human thoughts,” he wrote.

    Google is not the only company facing criticism for ads that promote replacing creative tasks with AI.

    In a recent commercial, Apple showed a hydraulic press crushing musical instruments and paint cans to reveal its new iPad Pro. The company ended up apologizing and pulled the ad from television.

    OpenAI technology chief Mira Murati, whose company trains its AI models on original creative work, said recently that AI will cause some creative jobs to go away, but that some of them should not have existed in the first place. Hollywood actors and unions vocally pushed back after Scarlett Johansson said OpenAI ripped off her voice for the new ChatGPT AI voice named “Sky.”

    Disclosure: CNBC parent NBCUniversal owns NBC Sports and NBC Olympics. NBC Olympics is the U.S. broadcast rights holder to all Summer and Winter Games through 2032.

    WATCH: Long-term AI will ‘eat a lot of software’

    ]]>
    Fri, Aug 02 2024 11:58:51 AM Fri, Aug 02 2024 03:01:06 PM
    Apple is spending more on AI, but remains far behind its Silicon Valley peers https://www.nbclosangeles.com/news/business/money-report/apple-is-spending-more-on-ai-but-remains-far-behind-its-silicon-valley-peers/3476364/ 3476364 post 9605397 Source: Apple https://media.nbclosangeles.com/2024/06/107426594-1718039043289-Screenshot_2024-06-10_at_10242_PM-2.jpg?quality=85&strip=all&fit=300,176
  • Apple’s capital expenditures are far below its mega-cap peers and are growing at a much slower rate.
  • Artificial intelligence is increasingly important at Apple, but the company takes a very different approach relative to Microsoft, Google and Meta.
  • “Embedded in our results this quarter is an increase year over year in the amount we’re spending for AI and Apple Intelligence,” CEO Tim Cook told CNBC’s Steve Kovach on Thursday.
    The topic of greatest interest to analysts on Apple’s quarterly earnings call on Thursday was a product that’s not even available to the general public yet.

    Apple Intelligence, the company’s forthcoming artificial intelligence system, could spur a fresh cycle of iPhone upgrades and hardware sales. But CEO Tim Cook and CFO Luca Maestri spent a good part of the Q&A portion of the analyst call dodging questions about the pace of Apple’s rollout, whether the company is already seeing a sales boost from the service, and Apple’s deal with OpenAI to integrate ChatGPT into its software.

    One question Cook was willing to partially address was about the company’s spending on AI servers. It’s an issue that’s come up throughout tech earnings season, as investors try to gauge where companies are in their AI infrastructure buildouts and how much more is coming.

    Cook acknowledged on the call that costs are on the rise. He gave similar comments to CNBC.

    “Embedded in our results this quarter is an increase year over year in the amount we’re spending for AI and Apple Intelligence,” Cook told CNBC’s Steve Kovach on Thursday.

    Apple reported $2.15 billion in payments for property, plant and equipment in the June quarter, up 8% quarter-over-quarter and about 3% from a year earlier. Some of those capital investments aren’t for AI, but for other Apple operations.

    The rise in Apple’s capital expenditure is tiny compared to its mega-cap peers, such as Microsoft, Google, and Meta. Those companies are spending huge sums to build and equip AI-focused data centers with Nvidia chips.

    For example, in the June quarter, Microsoft reported $13.87 billion in capital expenditures, according to FactSet, a 55% year-over-year increase. Alphabet’s expenses jumped 91% to $13.19 billion, while Meta’s capital expenditures rose 31% to $8.3 billion.

    Meta CEO Mark Zuckerberg has explained this spending surge in game theory terms. He said the risk of missing out on the generative AI boom is larger than the downside of spending too much on graphics processors and servers. Zuckerberg also wants to ensure that Apple won’t fully control the next major technology shift, if it turns out to be AI.

    “I actually think all the companies that are investing are making a rational decision,” Zuckerberg said on a Bloomberg podcast last week. “Because the downside of being behind is that you’re out of position for like the most important technology for the next 10 to 15 years.”

    Apple is playing a different game.

    Unlike Amazon, Google and Microsoft, Apple doesn’t have a cloud business that involves renting out infrastructure to other companies. Meta isn’t in that business either, but the company is investing in training its own open-source large language model, and in using AI to power its massive recommendation engine.

    Apple revealed this week in a technical paper that it rented cheaper Google TPUs in relatively small quantities, not Nvidia chips, to train its Apple Intelligence models. On Monday, the company released the first version of Apple Intelligence, its suite of AI features that will improve Siri, automatically generate emails and images and sort notifications. But it’s currently only available for developers to test.

    As it builds out its infrastructure, Apple has the advantage of having designed its own chips, both for its phones and servers, so the company doesn’t have to spend billions of dollars on third-party processors.

    Apple has a “hybrid” approach to data centers that pushes some of its capital expenditures onto its partners, and turns them into operating expenses for Apple.

    “On the CapEx part, it’s important to remember that we employ a hybrid kind of approach where we do things internally and we have certain partners that we do business with externally where the CapEx would appear in their respective businesses,” Cook said on the call with analysts.

    One of those partners is OpenAI, whose ChatGPT technology will be integrated into iOS later this year. OpenAI rents Nvidia GPUs from Microsoft, its primary investor. Apple also rents cloud capacity from providers including Amazon, Google, and Microsoft.

    Apple declined to talk about the details of the OpenAI agreement on Thursday, describing them as confidential. But Cook left open the possibility that there could be monetization opportunities.

    Apple’s quarterly results topped estimates on Thursday, with sales rising 5% to $85.8 billion. The stock ticked up less than 1% in extended trading.

    WATCH: Still questions around how Amazon will take advantage of AI

    ]]>
    Thu, Aug 01 2024 06:02:34 PM Thu, Aug 01 2024 06:48:15 PM
    Intel's AI platforms helped optimize logistics for the Olympic Games. What can they do for your enterprise? https://www.nbclosangeles.com/news/local/intels-ai-platforms-helped-optimize-logistics-for-the-olympic-games-what-can-they-do-for-your-enterprise/3460991/ 3460991 post 9697058 Michael Heim https://media.nbclosangeles.com/2024/07/Intel_Article01_AdobeStock_626542468_CROP.jpg?quality=85&strip=all&fit=300,169

    The following content is created in partnership with Intel. It does not reflect the work or opinions of the NBC Los Angeles editorial staff. Click here to learn more about Intel.

    The Olympic and Paralympic Games Paris 2024 are big events—dozens of venues, 10,500 athletes, 20,000 journalists and 45,000 volunteers. The enormous task of planning begins long before the doors open and anyone sets foot inside. And with the International Olympic Committee committed to making Paris 2024 the most sustainable Olympic Games ever—with half the carbon emissions of prior summer editions of the Olympic Games—it’s clear that the need to optimize logistics demands a new kind of solution. A solution that Intel technologies are poised to deliver.

    To achieve a broad set of goals—dependable, flexible planning for the Olympic venues; advance placement of everything from broadcast cameras to retail kiosks; and logistical support for planning everything from supply routes and athlete services to fan concessions—Intel deployed its AI-platform digital twinning technology, backed by people counting enabled by Intel’s AI platform, to deliver near real-time metrics and build on the activation for future Olympic Games—and beyond.

    Intel’s AI platform digital twinning: planning with virtual replicas

    Digital twins are virtual representations of objects, environments or systems, based on real-world data. Given enough data, a digital twin can be used to simulate complex interactions and virtually explore ideas that would be too expensive or resource-intensive to test in reality, helping users make decisions and manage optimizations with confidence, right from the desktop.
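    As a minimal illustration of the concept (not Intel’s software, and with invented numbers), a digital twin can be as simple as a parameterized model that lets planners test a change virtually before committing to it:

        def exit_time_minutes(attendees, gates, per_gate_rate=40.0):
            """Minutes to clear the venue if each gate passes `per_gate_rate`
            people per minute (an invented planning number)."""
            return attendees / (gates * per_gate_rate)

        venue = {"attendees": 12_000, "gates": 10}     # parameters fed from real-world data
        print(exit_time_minutes(**venue))              # 30.0 -- baseline layout
        print(exit_time_minutes(12_000, gates=12))     # 25.0 -- test adding two gates virtually

    Real twins model far richer interactions, but the workflow is the same: feed in measured data, try a change in software, and only then commit resources in the physical world.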

    In preparation for Paris 2024, Intel and its partners created virtual replicas, enabled by Intel® Xeon® processors, of most of the competition venues, in some cases even before construction of the venues was finalized. This enabled planners to make critical decisions about camera placements, event setup, transit links, security, crowd management and more—with the freedom to experiment, plenty of time to adjust to changing event and venue needs, and without the need to have staff on site, which cuts travel costs.

    Jean-Fauste Mukumbi, Solutions Development Manager at Intel Corporation, Olympics Program Office, explains how the virtual venues were created for Paris 2024: “We start with a blueprint in digital format provided by the Olympic organizing committee, and from those files—using powerful workstations with 4th Gen Intel® Xeon® processors and Intel® Arc™ A770 GPUs—we’re able to create 3D models and then load them into our software partner’s platform. Once those models are uploaded to that platform, which runs similarly to a 3D gaming engine, that content can be streamed to client devices all over the world.”

    During the planning process for the Olympic Games, the venue twins helped make it possible for all stakeholders to view and immediately work with changes to the plan, as updates to the 3D models were shared across the platform.

    “We’re really getting smarter about how you’re moving people around, how you’re thinking about concessions, how you’re thinking about signage, how you’re thinking about broadcast cameras and using those digital twins to make decisions and scenario plan right around those things,” says Sarah Vickers, head of Intel’s Olympic and Paralympic Games Program, “whereas traditionally that might have had to be done in person through 2D drawings.”


    Making it count: understanding customer needs on-site

    Planning is one thing, but once the Olympic Games are underway it takes careful measurement to understand how well those plans are working, and Paris 2024 will also showcase how Intel® AI platforms can be used to understand customer satisfaction. While the digital twinning platform used to plan Paris 2024 won’t be updated with real-time data during the events themselves, Intel AI platforms will continue to deliver critical customer insights on site, with a people-counting system installed at venue media centers and Olympic family lounges at all of the Olympic sites in the Paris area, in Lille and at Chateauroux.

    According to Mukumbi, the system uses “stereoscopic sensors along with machine learning to count the number of people coming in and out of the different venues.” This allows for optimization of resources like food and beverage supplies, security, transportation needs and more, based on real-time data about venue occupancy. As installed, the system “monitors the speed and height of an object that’s moving and based on those two elements it can decide whether it is a person or, for instance, a pram.”
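    The real system learns that decision from stereoscopic sensor data; the sketch below just illustrates the speed-and-height idea with invented cutoffs.

        def classify(height_m, speed_m_s):
            # Walking-height object moving at walking speed: count it as a person.
            if height_m >= 1.2 and 0.3 <= speed_m_s <= 3.0:
                return "person"
            # Low, slow-moving object (a stroller being pushed, say): don't count it.
            if height_m < 1.2 and speed_m_s <= 2.0:
                return "pram"
            return "unknown"

        print(classify(1.7, 1.4))  # person
        print(classify(0.9, 1.0))  # pram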

    As with many venues, the Olympic Games have kept track of people counts in the past, but this was formerly a manual process that was not only cumbersome and less accurate but, more significantly, left no record to drive further optimization in the future. The IOC, Mukumbi tells us, was “very interested in implementing this project because it can provide historical data that can help plan future Olympic Games.” Plus, the new system is 95% accurate, making the data far more reliable for future use.

    So what will journalists at the media centers and users of the family lounges notice? According to Mukumbi, “attendees will certainly notice the level of service that will be provided to them—there will be better resource allocation on site to help support them. We expect there will be a greater level of satisfaction for the people hosted by these venues.”

    Virtual spaces and live data: the future of optimization

    For Paris 2024, the digital twin platform will only be used for planning; there’s no on-site application currently in development for this year’s Olympic Games. But there is huge potential for future integrations, says Mukumbi: “Real-time data can be integrated into the digital twin platform and one of the interesting use cases for the future could be to have those functionalities merged into a single application.” In future Olympics, or other sporting events, concerts, conventions, transit centers or other busy spaces, these solutions can be deployed across multiple sites, and in the future live data integration could enable optimizations and efficiencies that go well beyond what’s possible in Paris this summer.

    “The technology is ready to be deployed on that scale,” says Mukumbi. “The possibilities really come down only to the type of data you want to collect on site.”

    “Think about queue times,” says Vickers. “You can use data to adjust your resources on site so that people experience shorter queues at concessions, so they’re optimizing their enjoyment, you’ve got happier customers, they’re spending more money, and you’ve got a more efficient event. That’s a win-win.”


    More from this series:

    ]]>
    Wed, Jul 31 2024 09:00:00 AM Tue, Jul 30 2024 01:45:20 PM
    Regulators consider first federal rule on AI-created political ads https://www.nbclosangeles.com/news/national-international/regulators-consider-first-federal-rule-on-ai-created-political-ads/3472685/ 3472685 post 9599535 AP Photo/Seth Wenig, File https://media.nbclosangeles.com/2024/06/AP24158647331275.jpg?quality=85&strip=all&fit=300,200 Amid a campaign tinged by concerns about so-called deepfakes, the Federal Communications Commission is proposing a first-of-its-kind rule to mandate disclosure of artificial intelligence-generated content in political ads, though it may not go into force before the election.

    Regulators have been slow to grapple with the new technology, which allows people to use cheap and readily available AI tools to impersonate others. FCC Chair Jessica Rosenworcel says disclosure is a critical — and, perhaps just as important, doable — first step in regulating artificially created content.

    “We spent the better part of the last year in Washington hand-wringing about artificial intelligence,” Rosenworcel said in an interview. “Let’s do something more than hand-wringing and pearl clutch.”

    The new rule would require TV and radio ads to disclose whether they include AI-generated content, sidestepping, for now, the debate about whether that content should be banned outright. Existing laws prevent outright deception in TV ads.

    “We don’t want to be in a position to render judgment; we simply want to disclose it so people can make their own decisions,” Rosenworcel said. 

    The move was inspired in part by the first-known deepfake in American national politics, a robocall impersonating President Joe Biden that told voters not to turn out in January’s New Hampshire primary. 

    “We kicked into high gear because we want to set an example,” Rosenworcel said of the swift official response to the New Hampshire deepfake. 

    The political consultant behind the deepfake robocall, who was outed by NBC News, faces a $6 million fine from the FCC and 26 criminal counts in New Hampshire courts. The U.S. Justice Department on Monday threw its weight behind a private lawsuit brought by the League of Women Voters. 

    The consultant, Steve Kramer, claimed he made the ad only to highlight the danger of AI and spur action.

    Some political ads have already started using artificially generated content in both potentially deceptive and nondeceptive ways, and generic AI content is becoming more common in nonpolitical consumer ads simply because it can be cheaper to produce.

    Some social media companies have banned AI-created political ads. Congress has considered several bills. And about 20 states have adopted their own laws regulating artificial political content, according to the nonprofit group Public Citizen, which tracks the efforts.

    But advocates say national policy is necessary to create a uniform framework. 

    The social media platform X not only has not banned videos created with AI, but its billionaire owner, Elon Musk, has been one of their promoters. Over the weekend, he shared with his 192 million followers a doctored video made to look like a campaign ad for Vice President Kamala Harris.

    The government does not regulate social media content, but the FCC has a long history of regulating political programming on TV and radio, including maintaining a database of political ad spending, with information that TV and radio stations are mandated to collect from ad buyers. The new rule would simply have broadcasters also ask ad buyers whether their spots were made with AI.

    The Federal Elections Commission, meanwhile, has been considering its own AI disclosure rules. The Republican chairman of the FEC wrote to Rosenworcel asking the FCC to stand down, arguing his agency is the rightful regulator of campaign ads.

    Rosenworcel brushed past the interagency squabbling, noting both agencies — along with the IRS and others — have played complementary roles in regulating political groups and spending for decades. The FCC also regulates a wider variety of ads than the FEC, including so-called issue ads run by nonprofit groups that do not expressly call for the defeat of a candidate. 

    And advocates note the FEC has a difficult time doing much of anything because it is, by design, split evenly between Republicans and Democrats, making consensus rare.

    “We’re barreling towards elections which may be distorted, or even decided, by political deepfakes. Yet this is an entirely avoidable dystopia if regulators simply demand disclosures when AI is used,” said Robert Weissman, a co-president of Public Citizen, who said he hopes the FCC rule will be finalized and implemented “as soon as possible.”

    Still, while Rosenworcel said the FCC is moving as quickly as possible, federal rulemaking is a deliberate process that requires clearing numerous hurdles, as well as time for public input.

    “There will be complicated questions down the road,” she said. “Now is the right time to start this conversation.”

    This story first appeared on NBCNews.com. More from NBC News:

    ]]>
    Mon, Jul 29 2024 10:42:56 PM Mon, Jul 29 2024 10:44:07 PM
    Generative AI requires massive amounts of power and water, and the aging U.S. grid can't handle the load https://www.nbclosangeles.com/news/business/money-report/generative-ai-requires-massive-amounts-of-power-and-water-and-the-aging-u-s-grid-cant-handle-the-load/3471089/ 3471089 post 9734560 Andrew Evers https://media.nbclosangeles.com/2024/07/108011452-1722027602055-Thumbnail_9.jpg?quality=85&strip=all&fit=300,176 Thanks to the artificial intelligence boom, new data centers are springing up as quickly as companies can build them. This has translated into huge demand for power to run and cool the servers inside. Now concerns are mounting about whether the U.S. can generate enough electricity for the widespread adoption of AI, and whether our aging grid will be able to handle the load.

    “If we don’t start thinking about this power problem differently now, we’re never going to see this dream we have,” said Dipti Vachani, head of automotive at Arm. The chip company’s low-power processors have become increasingly popular with hyperscalers like Google, Microsoft, Oracle and Amazon — precisely because they can reduce power use by up to 15% in data centers.

    Nvidia‘s latest AI chip, Grace Blackwell, incorporates Arm-based CPUs it says can run generative AI models on 25 times less power than the previous generation.

    “Saving every last bit of power is going to be a fundamentally different design than when you’re trying to maximize the performance,” Vachani said.

    This strategy of reducing power use by improving compute efficiency, often referred to as “more work per watt,” is one answer to the AI energy crisis. But it’s not nearly enough.

    One ChatGPT query uses nearly 10 times as much energy as a typical Google search, according to a report by Goldman Sachs. Generating an AI image can use as much power as charging your smartphone.
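    The commonly cited estimates behind that comparison are roughly 0.3 watt-hours per search and about 2.9 watt-hours per ChatGPT query; the query volume in the sketch below is an assumption, included purely for scale.

        google_wh, chatgpt_wh = 0.3, 2.9   # commonly cited per-query estimates, in watt-hours
        print(f"{chatgpt_wh / google_wh:.1f}x")          # ~9.7x -- "nearly 10 times"

        queries_per_day = 1e9                            # assumed volume, for illustration only
        extra_mwh = (chatgpt_wh - google_wh) * queries_per_day / 1e6
        print(f"{extra_mwh:,.0f} extra MWh per day")     # ~2,600 MWh/day at that volume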

    This problem isn’t new. Estimates in 2019 found training one large language model produced as much CO2 as the entire lifetime of five gas-powered cars.

    The hyperscalers building data centers to accommodate this massive power draw are also seeing emissions soar. Google’s latest environmental report showed greenhouse gas emissions rose nearly 50% from 2019 to 2023 in part because of data center energy consumption, although it also said its data centers are 1.8 times as energy efficient as a typical data center. Microsoft’s emissions rose nearly 30% from 2020 to 2024, also due in part to data centers. 

    And in Kansas City, where Meta is building an AI-focused data center, power needs are so high that plans to close a coal-fired power plant are being put on hold.

    Hundreds of ethernet cables connect server racks at a Vantage data center in Santa Clara, California, on July 8, 2024. (Photo: Katie Tarasov)

    Chasing power

    There are more than 8,000 data centers globally, with the highest concentration in the U.S. And, thanks to AI, there will be far more by the end of the decade. Boston Consulting Group estimates demand for data centers will rise 15%-20% every year through 2030, when they’re expected to account for 16% of total U.S. power consumption. That’s up from just 2.5% before OpenAI’s ChatGPT was released in 2022, and it’s equivalent to the power used by about two-thirds of U.S. homes.
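    Worked through in Python, the projection is a simple compound-growth calculation. The 5% starting share and 1% annual growth in total U.S. demand are assumptions; only the 15%-20% growth range and the 16% endpoint come from the report.

        def share_in_2030(share_now, dc_growth, years=7, grid_growth=0.01):
            """Data centers' share of U.S. power if their demand compounds at
            `dc_growth` while total demand grows at `grid_growth`."""
            return share_now * (1 + dc_growth) ** years / (1 + grid_growth) ** years

        for g in (0.15, 0.20):
            print(f"{g:.0%} annual growth -> {share_in_2030(0.05, g):.1%} of U.S. power")
        # 15% -> ~12.4%, 20% -> ~16.7%, bracketing the 16% figure above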

    CNBC visited a data center in Silicon Valley to find out how the industry can handle this rapid growth, and where it will find enough power to make it possible.

    “We suspect that the amount of demand that we’ll see from AI-specific applications will be as much or more than we’ve seen historically from cloud computing,” said Jeff Tench, Vantage Data Centers’ executive vice president of North America and APAC.

    Many big tech companies contract with firms like Vantage to house their servers. Tench said Vantage’s data centers typically have the capacity to use upward of 64 megawatts of power, or as much power as tens of thousands of homes.

    “Many of those are being taken up by single customers, where they’ll have the entirety of the space leased to them. And as we think about AI applications, those numbers can grow quite significantly beyond that into hundreds of megawatts,” Tench said.

    Santa Clara, California, where CNBC visited Vantage, has long been one of the nation’s hot spots for clusters of data centers near data-hungry clients. Nvidia’s headquarters was visible from the roof. Tench said there’s a “slowdown” in Northern California due to a “lack of availability of power from the utilities here in this area.”

    Vantage is building new campuses in Ohio, Texas and Georgia.

    “The industry itself is looking for places where there is either proximate access to renewables, either wind or solar, and other infrastructure that can be leveraged, whether it be part of an incentive program to convert what would have been a coal-fired plant into natural gas, or increasingly looking at ways in which to offtake power from nuclear facilities,” Tench said.

    Vantage Data Centers is expanding a campus outside Phoenix, Arizona, to offer 176 megawatts of capacity. (Photo: Vantage Data Centers)

    Some AI companies and data centers are experimenting with ways to generate electricity on site.

    OpenAI CEO Sam Altman has been vocal about this need. He recently invested in a solar startup that makes shipping-container-sized modules with panels and power storage. Altman has also invested in nuclear fission startup Oklo, which aims to make mini nuclear reactors housed in A-frame structures, and in the nuclear fusion startup Helion.

    Microsoft signed a deal with Helion last year to start buying its fusion electricity in 2028. Google partnered with a geothermal startup that says its next plant will harness enough power from underground to run a large data center. Vantage recently built a 100-megawatt natural gas plant that powers one of its data centers in Virginia, keeping it entirely off the grid.

    Hardening the grid

    The aging grid is often ill-equipped to handle the load even where enough power can be generated. The bottleneck occurs in getting power from the generation site to where it’s consumed. One solution is to add hundreds or thousands of miles of transmission lines. 

    “That’s very costly and very time-consuming, and sometimes the cost is just passed down to residents in a utility bill increase,” said Shaolei Ren, associate professor of electrical and computer engineering at the University of California, Riverside.

    One $5.2 billion effort to expand lines to an area of Virginia known as “data center alley” was met with opposition from local ratepayers who don’t want to see their bills increase to fund the project.

    Another solution is to use predictive software to reduce failures at one of the grid’s weakest points: the transformer.

    “All electricity generated must go through a transformer,” said VIE Technologies CEO Rahul Chaturvedi, adding that there are 60 million-80 million of them in the U.S.

    The average transformer is 38 years old, making transformers a common cause of power outages. Replacing them is expensive and slow. VIE makes a small sensor that attaches to transformers to predict failures and determine which ones can handle more load, so load can be shifted away from those at risk of failure.
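    VIE hasn’t published its algorithm, so the sketch below is purely illustrative of the idea: score each transformer against its own baseline, then steer load away from outliers and toward units with headroom.

        from statistics import mean

        def anomaly_score(recent_vibration, baseline):
            # Crude health proxy: how far recent readings run above the unit's baseline.
            return mean(recent_vibration) / baseline

        fleet = {
            "T1": {"vibration": [1.0, 1.1, 1.0], "baseline": 1.0},
            "T2": {"vibration": [2.4, 2.6, 2.5], "baseline": 1.0},
        }

        for name, unit in fleet.items():
            score = anomaly_score(unit["vibration"], unit["baseline"])
            verdict = "shift load away" if score > 2.0 else "has headroom for more load"
            print(f"{name}: score {score:.1f} -> {verdict}")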

    Chaturvedi said business has tripled since ChatGPT was released in 2022, and is poised to double or triple again next year.

    VIE Technologies CEO Rahul Chaturvedi holds up a sensor on June 25, 2024, in San Diego. VIE installs these on aging transformers to help predict and reduce grid failures. (Photo: VIE Technologies)

    Cooling servers down

    Generative AI data centers will also require 4.2 billion to 6.6 billion cubic meters of water withdrawal by 2027 to stay cool, according to Ren’s research. That’s more than the total annual water withdrawal of half of the U.K.

    “Everybody is worried about AI being energy intensive. We can solve that when we get off our ass and stop being such idiots about nuclear, right? That’s solvable. Water is the fundamental limiting factor to what is coming in terms of AI,” said Tom Ferguson, managing partner at Burnt Island Ventures.

    Ren’s research team found that every 10-50 ChatGPT prompts can burn through about what you’d find in a standard 16-ounce water bottle.
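    Spread over a 16-ounce (roughly 473-milliliter) bottle, that works out to a small but nontrivial amount per prompt:

        bottle_ml = 473                      # a 16-ounce bottle, in milliliters
        for prompts in (10, 50):
            print(f"{prompts} prompts -> ~{bottle_ml / prompts:.0f} ml each")
        # 10 prompts -> ~47 ml each; 50 prompts -> ~9 ml each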

    Much of that water is used for evaporative cooling, but Vantage’s Santa Clara data center has large air conditioning units that cool the building without any water withdrawal.

    Another solution is using liquid for direct-to-chip cooling.

    “For a lot of data centers, that requires an enormous amount of retrofit. In our case at Vantage, about six years ago, we deployed a design that would allow for us to tap into that cold water loop here on the data hall floor,” Vantage’s Tench said.

    Companies like Apple, Samsung and Qualcomm have touted the benefits of on-device AI, keeping power-hungry queries off the cloud and out of power-strapped data centers.

    “We’ll have as much AI as those data centers will support. And it may be less than what people aspire to. But ultimately, there’s a lot of people working on finding ways to un-throttle some of those supply constraints,” Tench said.

    ]]>
    Sun, Jul 28 2024 06:00:01 AM Sun, Jul 28 2024 05:34:04 PM
    A neurological disorder stole her voice. Jennifer Wexton takes it back on the House floor. https://www.nbclosangeles.com/news/national-international/neurological-disorder-voice-jennifer-wexton-house-ai/3468996/ 3468996 post 9724787 Photo by Paul Morigi/Getty Images for Holiday Road https://media.nbclosangeles.com/2024/07/GettyImages-1355765880.jpg?quality=85&strip=all&fit=300,200

    ]]>
    Thu, Jul 25 2024 11:48:08 AM Thu, Jul 25 2024 11:49:16 AM
    How AI and automation will reshape grocery stores and fast-food chains https://www.nbclosangeles.com/news/business/money-report/how-ai-and-automation-will-reshape-grocery-stores-and-fast-food-chains/3460004/ 3460004 post 9694408 Aaron P /Bauer-Griffin | Getty Images https://media.nbclosangeles.com/2024/07/107296962-1694088581955-gettyimages-1650118676-230906b6_taco_bell_defy_b-gr_11.jpeg?quality=85&strip=all&fit=300,176 AI isn’t just a hyped innovation in the tech sector; the food industry is also investing heavily in the red-hot trend.

    Americans heading to the grocery store or their favorite fast-food restaurant will already have noticed the introduction of the new technology in such services as self-checkout kiosks and even AI ordering in drive-thru lanes.

    While U.S. consumers facing continued food inflation hunt for deals and shift their spending habits accordingly, the food industry is working to stay competitive by investing in artificial intelligence to help curb high labor operating costs and reduce prices on some items.

    For example, fast-food chains like McDonald’s, Taco Bell and Wendy’s have reintroduced value menus. And big-box retailers Walmart and Target have lowered the price of certain grocery goods.

    “It’s very difficult in this environment to engineer great profits, great sales and to keep customers satisfied,” said Neil Saunders, GlobalData’s managing director and retail analyst. “It’s a very difficult equation to balance. And I think until the economy is on a different footing, it’s not going to be balanced completely. That’s the reality of it.”

    Amid this tough economic backdrop, McDonald’s announced its plan this year to spend $2 billion on bringing AI and robots into restaurants and drive-thrus. And in 2022, grocery stores spent $13 billion on tech automation, according to research by FMI, The Food Industry Association. FMI expects spending on innovations like smart carts and revamped self-checkout aisles to soar 400% through 2025.

    “We see a lot of upside over the next several years, with AI and technology being able to enhance customer experience while making the team members’ jobs a lot easier,” said Joe Park, Yum Brands’ chief digital and technology officer.

    Watch the video to find out more about how the food industry is using AI to reshape the customer experience.

    ]]>
    Mon, Jul 15 2024 11:58:09 AM Mon, Jul 15 2024 10:46:13 PM
    Two 80-something journalists tried ChatGPT. Then, they sued to protect the ‘written word' https://www.nbclosangeles.com/news/national-international/two-journalists-sue-to-protect-written-word-from-chatgpt/3456958/ 3456958 post 9684699 AP Photo/Charles Krupa https://media.nbclosangeles.com/2024/07/AP24150646801750.jpg?quality=85&strip=all&fit=300,200 When two octogenarian buddies named Nick discovered that ChatGPT might be stealing and repurposing a lifetime of their work, they tapped a son-in-law to sue the companies behind the artificial intelligence chatbot.

    Veteran journalists Nicholas Gage, 84, and Nicholas Basbanes, 81, who live near each other in the same Massachusetts town, each devoted decades to reporting, writing and book authorship.

    Gage poured his tragic family story and search for the truth about his mother’s death into a bestselling memoir that led John Malkovich to play him in the 1985 film “Eleni.” Basbanes transitioned his skills as a daily newspaper reporter into writing widely read books about literary culture.

    Basbanes was the first of the duo to try fiddling with AI chatbots, finding them impressive but prone to falsehoods and lack of attribution. The friends commiserated and filed their lawsuit earlier this year, seeking to represent a class of writers whose copyrighted work they allege “has been systematically pilfered by” OpenAI and its business partner Microsoft.

    “It’s highway robbery,” Gage said in an interview in his office next to the 18th-century farmhouse where he lives in central Massachusetts.

    “It is,” added Basbanes, as the two men perused Gage’s book-filled shelves. “We worked too hard on these tomes.”

    Now their lawsuit is subsumed into a broader case seeking class-action status led by household names like John Grisham, Jodi Picoult and “Game of Thrones” novelist George R. R. Martin; and proceeding under the same New York federal judge who’s hearing similar copyright claims from media outlets such as The New York Times, Chicago Tribune and Mother Jones.

    What links all the cases is the claim that OpenAI — with help from Microsoft’s money and computing power — ingested huge troves of human writings to “train” AI chatbots to produce human-like passages of text, without getting permission or compensating the people who wrote the original works.

    “If they can get it for nothing, why pay for it?” Gage said. “But it’s grossly unfair and very harmful to the written word.”

    OpenAI and Microsoft didn’t return requests for comment this week but have been fighting the allegations in court and in public. So have other AI companies confronting legal challenges not just from writers but visual artists, music labels and other creators who allege that generative AI profits have been built on misappropriation.

    The chief executive of Microsoft’s AI division, Mustafa Suleyman, defended AI industry practices at last month’s Aspen Ideas Festival, voicing the theory that training AI systems on content that’s already on the open internet is protected by the “fair use” doctrine of U.S. copyright laws.

    “The social contract of that content since the ’90s has been that it is fair use,” Suleyman said. “Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like.”

    Suleyman said it was more of a “gray area” in situations where some news organizations and others explicitly said they didn’t want tech companies “scraping” content off their websites. “I think that’s going to work its way through the courts,” he said.

    The cases are still in the discovery stage and scheduled to drag into 2025. In the meantime, some who believe their professions are threatened by AI business practices have tried to secure private deals to get technology companies to pay a fee to license their archives. Others are fighting back.

    “Somebody had to go out and interview real people in the real world and conduct real research by poring over documents and then synthesizing those documents and coming up with a way to render them in clear and simple prose,” said Frank Pine, executive editor of MediaNews Group, publisher of dozens of newspapers including the Denver Post, Orange County Register and St. Paul Pioneer Press. The newspaper chain sued OpenAI in April.

    “All of that is real work, and it’s work that AI cannot do,” Pine said. “An AI app is never going to leave the office and go downtown where there’s a fire and cover that fire.”

    Deemed too similar to lawsuits filed late last year, the Massachusetts duo’s January complaint has been folded into a consolidated case brought by other nonfiction writers as well as fiction writers represented by the Authors Guild. That means Gage and Basbanes won’t likely be witnesses in any upcoming trial in Manhattan’s federal court. But in the twilight of their careers, they thought it important to take a stand for the future of their craft.

    Gage fled Greece as a 9-year-old, haunted by his mother’s 1948 killing by firing squad during the country’s civil war. He joined his father in Worcester, Massachusetts, not far from where he lives today. And with a teacher’s nudge, he pursued writing and built a reputation as a determined investigative reporter digging into organized crime and political corruption for The New York Times and other newspapers.

    A fellow Greek American journalist, Basbanes had heard of and admired the elder “hotshot reporter” by the time he got a surprise telephone call at his desk at Worcester’s Evening Gazette in the early 1970s. The voice asked for Mr. Basbanes, using the Greek way of pronouncing the name.

    “You were like a talent scout,” Basbanes said. “We established a friendship. I mean, I’ve known him longer than I know my wife, and we’ve been married 49 years.”

    Basbanes hasn’t mined his own story like Gage has, but he says it can sometimes take days to craft a great paragraph and confirm all of the facts in it. It took him years of research and travel to archives and auction houses to write his 1995 book “A Gentle Madness,” about the art of book collecting from ancient Egypt through modern times.

    “I love that ‘A Gentle Madness’ is in 1,400 libraries or so,” Basbanes said. “This is what a writer strives for — to be read. But you also write to earn, to put food on the table, to support your family, to make a living. And as long as that’s your intellectual property, you deserve to be compensated fairly for your efforts.”

    Gage took a great professional risk when he quit his job at the Times and went $160,000 into debt to find out who was responsible for his mother’s death.

    “I tracked down everyone who was in the village when my mother was killed,” he said. “And they had been scattered all over Eastern Europe. So it cost a lot of money and a lot of time. I had no assurance that I would get that money back. But when you commit yourself to something as important as my mother’s story was, the risks are tremendous, the effort is tremendous.”

    In other words, ChatGPT couldn’t do that. But what worries Gage is that ChatGPT could make it harder for others to do that.

    “Publications are going to die. Newspapers are going to die. Young people with talent are not going to go into writing,” Gage said. “I’m 84 years old. I don’t know if this is going to be settled while I’m still around. But it’s important that a solution be found.”

    ]]>
    Thu, Jul 11 2024 02:11:11 PM Thu, Jul 11 2024 02:18:07 PM
    Pope Francis becomes first pontiff to address a G7 summit, raises alarm about AI https://www.nbclosangeles.com/news/national-international/pope-francis-will-be-the-first-pontiff-to-address-a-g7-summit-hes-raising-the-alarm-about-ai/3436699/ 3436699 post 9618553 Christopher Furlong/Pool Photo via AP https://media.nbclosangeles.com/2024/06/AP24166489733291.jpg?quality=85&strip=all&fit=300,200


    ]]>
    Fri, Jun 14 2024 02:12:09 AM Fri, Jun 14 2024 11:29:04 AM
    Megan Thee Stallion calls out ‘fake' sexually explicit video circulating on X https://www.nbclosangeles.com/entertainment/entertainment-news/megan-thee-stallion-calls-out-fake-sexually-explicit-video-circulating-on-x/3433327/ 3433327 post 9606690 Photo by Taylor Hill/Getty Images for Boston Calling https://media.nbclosangeles.com/2024/06/GettyImages-2154801585.jpg?quality=85&strip=all&fit=300,200 Rapper Megan Thee Stallion is the latest female celebrity to speak out after being targeted with a sexually explicit deepfake video that circulated on X over the weekend. 

    “It’s really sick how yall go out of the way to hurt me when you see me winning,” Megan Thee Stallion posted on X on Saturday.

    The artist, whose real name is Megan Pete, appeared to be alluding to the video circulating online. She wrote it was “fake,” adding, “Just know today was your last day playing with me and I mean it.”

    Deepfakes refer to digital media that are generated or altered using artificial intelligence or other visual or audio manipulation tools. Some of the most prominent examples “face-swap” individuals, overwhelmingly women and girls, into pornographic or sexually suggestive material. Both public and private figures have been targeted, and deepfake creators can even profit from selling the material online.

    A spokesperson for Roc Nation, which represents Megan Thee Stallion, declined to comment.

    NBC News viewed 18 posts on X that contained the fake video of Megan Thee Stallion, including one that juxtaposed it with the original video that had been used to create the deepfake. Six of the posts had more than 30,000 views each. 

    By Monday afternoon, after NBC News had reached out for comment, it appeared that some of the posts had been removed from the platform.

    A spokesperson for X said the platform’s “rules prohibit the sharing of non-consensual intimate media and we are proactively removing this content.”

    The Elon Musk-owned platform, formerly known as Twitter, has previously been used to spread AI-generated deepfakes of celebrity women, most notably when a series of fake nude and sexually suggestive images of Taylor Swift went viral on the platform. The New York Times reported that one post was viewed 47 million times before it was taken down. In response to the incident, the platform paused the ability to search Swift’s name for three days.

    Other celebrities without Swift’s level of fame have struggled to get the platform to take down sexually explicit deepfakes, including Marvel star Xochitl Gomez, who said her team could not get the material removed, even though she was just 17 at the time. Later, X removed some deepfakes of Gomez after NBC News reached out. Multiple TikTok stars have also been targeted with sexually explicit deepfakes on X. 

    Megan Thee Stallion, whose Hot Girl Summer Tour sold out arenas across the U.S., has been a frequent target of online harassment since she was shot in the foot by rapper Tory Lanez in 2020. He was sentenced to 10 years in prison in 2023. One of the same hip-hop news commentators who cast doubt on the shooting incident during the trial said during a livestream that she “drew attention” to the sexually explicit deepfake of the rapper over the weekend.

    Previously, a legal representative for Megan Thee Stallion said they were “exploring all legal options” related to misinformation spread about her by bloggers. 

    This story first appeared on NBCNews.com.

    ]]>
    Mon, Jun 10 2024 06:17:05 PM Mon, Jun 10 2024 06:17:05 PM
    Colorado the first state to move forward with attempt to regulate AI's hidden role in American life https://www.nbclosangeles.com/news/national-international/colorado-the-first-state-to-move-forward-with-attempt-to-regulate-ais-hidden-role-in-american-life/3419292/ 3419292 post 9560390 AP Photo/Michael Dwyer, File https://media.nbclosangeles.com/2024/05/AP24143640866943.jpg?quality=85&strip=all&fit=300,200 The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide.

    Only one of seven bills aimed at preventing AI’s penchant for discrimination when it helps make consequential decisions — including who gets hired, who receives money for a home or who gets medical care — has passed. Colorado Gov. Jared Polis hesitantly signed the bill on Friday.

    Colorado’s bill and those that faltered in Washington, Connecticut and elsewhere faced battles on many fronts: fights between civil rights groups and the tech industry, lawmakers wary of wading into a technology few yet understand, and governors worried about being the odd state out and spooking AI startups.

    Polis signed Colorado’s bill “with reservations,” saying in a statement he was wary of regulations dousing AI innovation. The bill has a two-year runway and can be altered before it takes effect.

    “I encourage (lawmakers) to significantly improve on this before it takes effect,” Polis wrote.

    Colorado’s proposal, along with six sister bills, is complex, but it will broadly require companies to assess the risk of discrimination from their AI and to inform customers when AI was used to help make a consequential decision about them.

    The bills are separate from more than 400 AI-related bills that have been debated this year. Most are aimed at slices of AI, such as the use of deepfakes in elections or to make pornography.

    The seven bills are more ambitious, applying across major industries and targeting discrimination, one of the technology’s most perverse and complex problems.

    “We actually have no visibility into the algorithms that are used, whether they work or they don’t, or whether we’re discriminated against,” said Rumman Chowdhury, AI envoy for the U.S. Department of State who previously led Twitter’s AI ethics team.

    While anti-discrimination laws are already on the books, those who study AI discrimination say it’s a different beast, which the U.S. is already behind in regulating.

    “The computers are making biased decisions at scale,” said Christine Webber, a civil rights attorney who has worked on class action lawsuits over discrimination including against Boeing and Tyson Foods. Now, Webber is nearing final approval on one of the first-in-the-nation settlements in a class action over AI discrimination.

    “Not, I should say, that the old systems were perfectly free from bias either,” said Webber. But “any one person could only look at so many resumes in the day. So you could only make so many biased decisions in one day and the computer can do it rapidly across large numbers of people.”

    When you apply for a job, an apartment or a home loan, there’s a good chance AI is assessing your application: sending it up the line, assigning it a score or filtering it out. It’s estimated as many as 83% of employers use algorithms to help in hiring, according to the Equal Employment Opportunity Commission.

    AI itself doesn’t know what to look for in a job application, so it’s taught based on past resumes. The historical data that is used to train algorithms can smuggle in bias.

    Amazon, for example, worked on a hiring algorithm that was trained on old resumes: largely male applicants. When assessing new applicants, it downgraded resumes with the word “women’s” or that listed women’s colleges because they were not represented in the historical data — the resumes — it had learned from. The project was scuttled.
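
    The Amazon episode illustrates a general mechanism: a model trained on skewed historical outcomes learns the skew as if it were signal. Below is a minimal toy sketch of that failure mode in Python (invented resumes and labels, not Amazon's system) showing how a single word ends up penalized purely because of who was hired in the past.

```python
# Toy illustration of how bias in historical hiring data leaks into a model.
# This is NOT Amazon's system; every resume and label below is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical resumes: past hires (label 1) skew male, so words like
# "women's" only ever co-occur with rejections in the training data.
resumes = [
    "captain of men's chess club, python developer",    # hired
    "men's soccer team, java engineer",                 # hired
    "python developer, database experience",            # hired
    "captain of women's chess club, python developer",  # rejected
    "women's soccer team, java engineer",               # rejected
]
hired = [1, 1, 1, 0, 0]

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(resumes), hired)

# Two equally qualified new applicants, differing in a single word:
for resume in ["women's debate team, python developer",
               "men's debate team, python developer"]:
    prob = model.predict_proba(vec.transform([resume]))[0, 1]
    print(f"{resume!r} -> hire probability {prob:.2f}")
# The "women's" resume scores lower: the model has learned the skew of
# the historical data, not anything about qualifications.
```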

    Webber’s class action lawsuit alleges that an AI system that scores rental applications disproportionately assigned lower scores to Black or Hispanic applicants. A study found that an AI system built to assess medical needs passed over Black patients for special care.

    Studies and lawsuits have allowed a glimpse under the hood of AI systems, but most algorithms remain veiled. Americans are largely unaware that these tools are being used, polling from Pew Research shows. Companies generally aren’t required to explicitly disclose that an AI was used.

    “Just pulling back the curtain so that we can see who’s really doing the assessing and what tool is being used is a huge, huge first step,” said Webber. “The existing laws don’t work if we can’t get at least some basic information.”

    That’s what Colorado’s bill, along with another surviving bill in California, is trying to change. The bills, including a flagship proposal in Connecticut that was killed amid opposition from the governor, are largely similar.

    Colorado’s bill will require companies using AI to help make consequential decisions for Americans to annually assess their AI for potential bias; implement an oversight program within the company; tell the state attorney general if discrimination was found; and inform customers when an AI was used to help make a decision for them, including an option to appeal.
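
    The required bias assessment is described only at a high level. As one illustration of what such an assessment can involve, a common screen borrowed from U.S. employment practice is the “four-fifths rule,” which flags a tool when any group's selection rate falls below 80% of the most-favored group's. A minimal sketch with hypothetical numbers follows; the Colorado law does not prescribe this particular metric.

```python
# One common disparate-impact screen, the "four-fifths rule": flag a tool
# if any group's selection rate falls below 80% of the best group's rate.
# The groups and counts are hypothetical; the Colorado law does not
# mandate this specific metric.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

applicants = {"group_a": (50, 100), "group_b": (30, 100)}  # hypothetical
for group, ratio in impact_ratios(applicants).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
# group_b is selected at 60% of group_a's rate, below the 0.8 threshold,
# so an oversight program would single the tool out for closer scrutiny.
```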

    Labor unions and academics fear that a reliance on companies overseeing themselves means it’ll be hard to proactively address discrimination in an AI system before it’s done damage. Companies are fearful that forced transparency could reveal trade secrets, including in potential litigation, in this hyper-competitive new field.

    AI companies also pushed for, and generally received, a provision that only allows the attorney general, not citizens, to file lawsuits under the new law. Enforcement details have been left up to the attorney general.

    While larger AI companies have more or less been on board with these proposals, a group of smaller Colorado-based AI companies said the requirements might be manageable by behemoth AI companies, but not by budding startups.

    “We are in a brand new era of primordial soup,” said Logan Cerkovnik, founder of Thumper.ai, referring to the field of AI. “Having overly restrictive legislation that forces us into definitions and restricts our use of technology while this is forming is just going to be detrimental to innovation.”

    All agreed, along with many AI companies, that what’s formally called “algorithmic discrimination” is critical to tackle. But they said the bill as written falls short of that goal. Instead, they proposed beefing up existing anti-discrimination laws.

    Chowdhury worries that lawsuits are too costly and time-consuming to be an effective enforcement tool, and that laws should go beyond even what Colorado is proposing. She and other academics have instead proposed accredited, independent organizations that can explicitly test AI algorithms for potential bias.

    “You can understand and deal with a single person who is discriminatory or biased,” said Chowdhury. “What do we do when it’s embedded into the entire institution?”


    ]]>
    Thu, May 23 2024 02:01:16 AM Thu, May 23 2024 02:01:16 AM
    Here's how Mastercard plans to use AI to find stolen cards quicker https://www.nbclosangeles.com/news/national-international/mastercard-to-use-generative-artificial-intelligence-to-stop-credit-card-fraud/3418931/ 3418931 post 9559248 AP Photo/Mark Lennihan, File https://media.nbclosangeles.com/2024/05/AP24142804106689.jpg?quality=85&strip=all&fit=300,187 Mastercard said Wednesday that it expects to be able to discover that your credit or debit card number has been compromised well before it ends up in the hands of a cybercriminal.

    In its latest software update rolling out this week, Mastercard is integrating artificial intelligence into its fraud-prediction technology that it expects will be able to see patterns in stolen cards faster and allow banks to replace them before they are used by criminals.

    “Generative AI is going to allow to figure out where did you perhaps get your credentials compromised, how do we identify how it possibly happened, and how do we very quickly remedy that situation not only for you, but the other customers who don’t know they are compromised yet,” said Johan Gerber, executive vice president of security and cyber innovation at Mastercard, in an interview.

    Mastercard, which is based in Purchase, New York, says with this new update it can take other patterns or contextual information, such as geography, time and addresses, and combine them with incomplete but compromised credit card numbers that appear in databases to get to the cardholders sooner and replace the bad card.

    The patterns can now also be used in reverse, with batches of known-bad cards used to spot potentially compromised merchants or payment processors. The pattern recognition goes beyond what humans could do through database inquiries or other standard methods, Gerber said.
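
    Mastercard has not published the mechanics, so the following is only a conceptual sketch of the “reverse” use Gerber describes: match partial leaked numbers against issued cards, then look for a merchant the matched cards have in common as a candidate breach point. Every card number and merchant in the sketch is fabricated.

```python
# Conceptual sketch only; Mastercard has not published its method.
# Idea: match partial leaked numbers to issued cards, then look for a
# merchant the matched cards share, a hint at the breach point.
# Every card number and merchant below is fabricated.
from collections import Counter

issued_cards = {  # card number -> recent merchants
    "5100001111112222": ["store_a", "gas_b", "cafe_c"],
    "5100003333334444": ["store_a", "shop_d"],
    "5100005555556666": ["shop_d", "cafe_c"],
}

def matches(partial: str, card: str) -> bool:
    # Dark-web dumps often expose only the first 6 and last 4 digits.
    first6, last4 = partial.split("...")
    return card.startswith(first6) and card.endswith(last4)

leaked = ["510000...2222", "510000...4444"]  # partial numbers from a dump

compromised = [card for card in issued_cards
               if any(matches(p, card) for p in leaked)]
merchant_counts = Counter(m for card in compromised
                          for m in issued_cards[card])
print("likely compromised:", compromised)
print("common merchant:", merchant_counts.most_common(1))
# Both matched cards transacted at "store_a", a candidate breach point
# whose other customers could be re-carded proactively.
```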

    Billions of stolen credit card and debit card numbers are floating in the dark web, available for purchase by any criminal. Most were stolen from merchants in data breaches over the years, but also a significant number have been stolen from unsuspecting consumers who used their credit or debit cards at the wrong gas station, ATM or online merchant.

    These compromised cards can remain undetected for weeks, months or even years. Only when the payment networks themselves dive into the dark web to fish for stolen numbers, a merchant learns about a breach, or a criminal uses the card do the payment networks and banks figure out that a batch of cards might be compromised.

    “We can now actually proactively reach out to the banks to make sure that we service that consumer and get them a new card in her or his hands so they can go about their lives with as little disruption as possible,” Gerber said.

    The payment networks are largely trying to move away from “static” credit and debit card numbers — that is, a single card number and expiration date used universally across all merchants — and toward unique numbers for specific transactions. But it may take years for that transition to happen, particularly in the U.S., where payment technology adoption tends to lag.
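
    The “unique numbers for specific transactions” that the networks are moving toward is commonly called payment tokenization. Here is a deliberately simplified sketch of the core idea; real network tokenization (for example, under the EMVCo specifications) adds cryptograms and issuer verification that are not modeled here.

```python
# Simplified sketch of payment tokenization: the merchant handles a
# single-use token instead of the real card number (PAN). Real network
# tokenization (e.g., EMVCo's spec) adds cryptograms and issuer checks.
import secrets

class TokenVault:
    """Maps single-use tokens back to real card numbers."""

    def __init__(self) -> None:
        self._tokens: dict[str, str] = {}

    def issue_token(self, pan: str) -> str:
        token = secrets.token_hex(8)  # random stand-in for the PAN
        self._tokens[token] = pan
        return token

    def redeem(self, token: str) -> str | None:
        # pop() makes the token strictly single-use.
        return self._tokens.pop(token, None)

vault = TokenVault()
t = vault.issue_token("5100001111112222")   # fabricated PAN
print("merchant stores only:", t)           # a breach here leaks no PAN
print("network resolves it to:", vault.redeem(t))
print("replayed token resolves to:", vault.redeem(t))  # None: already spent
```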

    While more than 90% of all in-person transactions worldwide are now using chip cards, the figure in the U.S. is closer to 70%, according to EMVCo, the technological organization behind the chip in credit and debit cards.

    Mastercard’s update comes as its major competitor, Visa Inc., also looks for ways to make consumers discard the 16-digit credit and debit card number. Visa last week announced major changes to how credit and debit cards will operate in the U.S., meaning Americans will be carrying fewer physical cards in their wallets, and the 16-digit credit or debit card number printed on every card will become increasingly irrelevant.

    ]]>
    Wed, May 22 2024 03:25:41 PM Wed, May 22 2024 03:26:17 PM
    Scarlett Johansson slams ChatGPT for using ‘eerily similar' voice https://www.nbclosangeles.com/entertainment/entertainment-news/scarlett-johansson-slams-chatgpt-for-using-eerily-similar-voice/3416913/ 3416913 post 9553047 Paul Morigi/Getty Images https://media.nbclosangeles.com/2024/05/GettyImages-2150477093-e1716255549116.jpg?quality=85&strip=all&fit=200,300


    ]]>
    Mon, May 20 2024 07:04:06 PM Mon, May 20 2024 07:04:06 PM
    Microsoft's AI chatbot will ‘recall' everything you do on a PC https://www.nbclosangeles.com/news/national-international/microsofts-ai-chatbot-will-recall-everything-you-do-on-a-pc/3416704/ 3416704 post 9552334 AP Photo/Lindsey Wasson https://media.nbclosangeles.com/2024/05/AP24141733685664.jpg?quality=85&strip=all&fit=300,200 Microsoft wants laptop users to get so comfortable with its artificial intelligence chatbot that it will remember everything you’re doing on your computer and help figure out what you want to do next.

    The software giant on Monday revealed a new class of AI-imbued personal computers as it confronts heightened competition from Big Tech rivals in pitching generative AI technology that can compose documents, make images and serve as a lifelike personal assistant at work or home.

    The announcements ahead of Microsoft’s annual Build developer conference centered on fusing its AI assistant, called Copilot, into the Windows operating system for PCs, where Microsoft already has the eyes of millions of consumers.

    Yusuf Mehdi, Microsoft executive vice president and consumer chief marketing officer, speaks during a showcase event of the company’s AI assistant, Copilot, ahead of the annual Build developer conference at Microsoft headquarters, Monday, May 20, 2024, in Redmond, Wash. (AP Photo/Lindsey Wasson)

    The new features will include Windows Recall, giving the AI assistant what Microsoft describes as “photographic memory” of a person’s virtual activity. Microsoft promises to protect users’ privacy by giving them the option to filter out what they don’t want tracked, and keeping the tracking on the device.
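
    Microsoft did not detail Recall's internals in these announcements, but the behavior described (periodic local snapshots, a searchable history, user-controlled exclusions) has a simple conceptual shape. A toy sketch, emphatically not Microsoft's code:

```python
# Toy sketch of the described behavior: periodic on-device snapshots,
# a searchable local history, and user-controlled exclusions. Not
# Microsoft's implementation; every detail here is invented.
import time
from dataclasses import dataclass, field

@dataclass
class RecallStore:
    excluded_apps: set[str] = field(default_factory=set)  # user opt-outs
    _snapshots: list[tuple[float, str, str]] = field(default_factory=list)

    def capture(self, app: str, screen_text: str) -> None:
        if app in self.excluded_apps:
            return  # filtered: nothing is ever recorded for this app
        self._snapshots.append((time.time(), app, screen_text))

    def search(self, query: str) -> list[tuple[float, str, str]]:
        q = query.lower()
        return [s for s in self._snapshots if q in s[2].lower()]

store = RecallStore(excluded_apps={"password_manager"})
store.capture("browser", "flights to Denver in June")
store.capture("password_manager", "secret credentials")  # never stored
print(store.search("denver"))  # the index stays on the device, like the data
```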

    It’s a step toward machines that “instantly see us, hear, reason about our intent and our surroundings,” said CEO Satya Nadella.

    “We’re entering this new era where computers not only understand us, but can actually anticipate what we want and our intent,” Nadella said at an event at the company’s headquarters in Redmond, Washington.

    The conference that starts Tuesday in Seattle follows big AI announcements last week from rival Google, as well as Microsoft’s close business partner OpenAI, which built the AI large language models on which Microsoft’s Copilot is based.

    Google rolled out a retooled search engine that periodically puts AI-generated summaries above website links at the top of the results page, while also showing off a still-in-development AI assistant, Astra, that will be able to “see” and converse about things shown through a smartphone’s camera lens.

    ChatGPT-maker OpenAI unveiled a new version of its chatbot last week, demonstrating an AI voice assistant with human characteristics that can banter about what someone’s wearing and even attempt to assess a person’s emotions. The voice sounded so much like Scarlett Johansson playing an AI character in the sci-fi movie “Her” that OpenAI dropped the voice from its collection Monday.

    OpenAI also rolled out a new desktop version of ChatGPT designed for Apple’s Mac computers.

    Next up is Apple’s own annual developers conference in June. Apple CEO Tim Cook signaled at the company’s annual shareholder meeting in February that it has been making big investments in generative AI.

    Some of Microsoft’s announcements Monday appeared designed to blunt whatever Apple has in store. The newly AI-enhanced Windows PCs will start rolling out on June 18 on computers made by Microsoft partners Acer, ASUS, Dell, HP, Lenovo and Samsung, as well as on Microsoft’s Surface line of devices. But they’ll be reserved for premium models starting at $999.

    While Copilot is rooted in OpenAI’s large language models, Microsoft said the new AI PCs will also rely heavily on its own homegrown “small language models” that are designed to be more efficient and able to more easily run on a consumer’s personal device.

    ]]>
    Mon, May 20 2024 03:36:06 PM Mon, May 20 2024 03:36:06 PM
    Illness took away her voice. AI created a replica she carries in her phone https://www.nbclosangeles.com/news/national-international/ai-replica-voice-phone/3410573/ 3410573 post 9532558 AP Photo/Steven Senne https://media.nbclosangeles.com/2024/05/AP24120732258763.jpg?quality=85&strip=all&fit=300,200 The voice Alexis “Lexi” Bogan had before last summer was exuberant.

    She loved to belt out Taylor Swift and Zach Bryan ballads in the car. She laughed all the time — even while corralling misbehaving preschoolers or debating politics with friends over a backyard fire pit. In high school, she was a soprano in the chorus.

    Then that voice was gone.

    Doctors in August removed a life-threatening tumor lodged near the back of her brain. When the breathing tube came out a month later, Bogan had trouble swallowing and strained to say “hi” to her parents. Months of rehabilitation aided her recovery, but her speech is still impaired. Friends, strangers and her own family members struggle to understand what she is trying to tell them.

    In April, the 21-year-old got her old voice back. Not the real one, but a voice clone generated by artificial intelligence that she can summon from a phone app. Trained on a 15-second time capsule of her teenage voice — sourced from a cooking demonstration video she recorded for a high school project — her synthetic but remarkably real-sounding AI voice can now say almost anything she wants.

    She types a few words or sentences into her phone and the app instantly reads them aloud.
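
    The type-to-speak loop itself is the simple part. As a rough stand-in for her custom app, using the generic offline text-to-speech library pyttsx3 rather than her cloned voice, it comes down to a few lines:

```python
# The basic type-to-speak loop, using the generic offline text-to-speech
# library pyttsx3 as a stand-in for Bogan's custom app and cloned voice.
import pyttsx3

engine = pyttsx3.init()

def speak(text: str) -> None:
    engine.say(text)
    engine.runAndWait()  # blocks until playback finishes

speak("Hi, can I please get a grande iced brown sugar oat milk shaken espresso")
```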

    “Hi, can I please get a grande iced brown sugar oat milk shaken espresso,” said Bogan’s AI voice as she held the phone out her car’s window at a Starbucks drive-thru.

    Experts have warned that rapidly improving AI voice-cloning technology can amplify phone scams, disrupt democratic elections and violate the dignity of people — living or dead — who never consented to having their voice recreated to say things they never spoke.

    It’s been used to produce deepfake robocalls to New Hampshire voters mimicking President Joe Biden. In Maryland, authorities recently charged a high school athletic director with using AI to generate a fake audio clip of the school’s principal making racist remarks.

    But Bogan and a team of doctors at Rhode Island’s Lifespan hospital group believe they’ve found a use that justifies the risks. Bogan is one of the first people — the only one with her condition — who have been able to recreate a lost voice with OpenAI’s new Voice Engine. Some other AI providers, such as the startup ElevenLabs, have tested similar technology for people with speech impediments and loss — including a lawyer who now uses her voice clone in the courtroom.

    “We’re hoping Lexi’s a trailblazer as the technology develops,” said Dr. Rohaid Ali, a neurosurgery resident at Brown University’s medical school and Rhode Island Hospital. Millions of people with debilitating strokes, throat cancer or neurodegenerative diseases could benefit, he said.

    “We should be conscious of the risks, but we can’t forget about the patient and the social good,” said Dr. Fatima Mirza, another resident working on the pilot. “We’re able to help give Lexi back her true voice and she’s able to speak in terms that are the most true to herself.”

    Mirza and Ali, who are married, caught the attention of ChatGPT-maker OpenAI because of their previous research project at Lifespan using the AI chatbot to simplify medical consent forms for patients. The San Francisco company reached out while on the hunt earlier this year for promising medical applications for its new AI voice generator.

    Bogan was still slowly recovering from surgery. The illness started last summer with headaches, blurry vision and a droopy face, alarming doctors at Hasbro Children’s Hospital in Providence. They discovered a vascular tumor the size of a golf ball pressing on her brain stem and entangled in blood vessels and cranial nerves.

    “It was a battle to get control of the bleeding and get the tumor out,” said pediatric neurosurgeon Dr. Konstantina Svokos.

    The 10-hour length of the surgery coupled with the tumor’s location and severity damaged Bogan’s tongue muscles and vocal cords, impeding her ability to eat and talk, Svokos said.

    “It’s almost like a part of my identity was taken when I lost my voice,” Bogan said.

    The feeding tube came out this year. Speech therapy continues, enabling her to speak intelligibly in a quiet room but with no sign she will recover the full lucidity of her natural voice.

    “At some point, I was starting to forget what I sounded like,” Bogan said. “I’ve been getting so used to how I sound now.”

    Whenever the phone rang at the family’s home in the Providence suburb of North Smithfield, she would push it over to her mother to take her calls. She felt she was burdening her friends whenever they went to a noisy restaurant. Her dad, who has hearing loss, struggled to understand her.

    Back at the hospital, doctors were looking for a pilot patient to experiment with OpenAI’s technology.

    “The first person that came to Dr. Svokos’ mind was Lexi,” Ali said. “We reached out to Lexi to see if she would be interested, not knowing what her response would be. She was game to try it out and see how it would work.”

    Bogan had to go back a few years to find a suitable recording of her voice to “train” the AI system on how she spoke. It was a video in which she explained how to make a pasta salad.

    Her doctors intentionally fed the AI system just a 15-second clip; cooking sounds marred other parts of the video. The short sample was also all that OpenAI needed — an improvement over previous technology, which required much lengthier samples.

    They also knew that getting something useful out of 15 seconds could be vital for any future patients who have no trace of their voice on the internet. A brief voicemail left for a relative might have to suffice.
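
    OpenAI has not released a public API for Voice Engine, so there is no real client to demonstrate. Purely to show the shape of the workflow the doctors describe, here is a sketch built around an invented VoiceCloneClient placeholder:

```python
# Shape-of-the-workflow sketch only: OpenAI has not released a public
# Voice Engine API, so VoiceCloneClient and its methods are invented.

class VoiceCloneClient:
    """Hypothetical stand-in for a voice-cloning service SDK."""

    def clone_from_reference(self, clip: bytes) -> str:
        # A real service would condition a speech model on the clip.
        return "voice-0001"  # placeholder voice ID

    def synthesize(self, voice_id: str, text: str) -> bytes:
        # A real service would return rendered audio in the cloned voice.
        return b""  # placeholder audio

client = VoiceCloneClient()

# The entire training set is one 15-second clip (pulled from an old
# cooking video here; for other patients, perhaps a saved voicemail).
reference_clip = b"...15 seconds of audio..."  # placeholder bytes
voice_id = client.clone_from_reference(reference_clip)
audio = client.synthesize(voice_id, "Hi, thank you for waiting.")
```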

    When they tested it for the first time, everyone was stunned by the quality of the voice clone. Occasional glitches — a mispronounced word, a missing intonation — were mostly imperceptible. In April, doctors equipped Bogan with a custom-built phone app that only she can use.

    “I get so emotional every time I hear her voice,” said her mother, Pamela Bogan, tears in her eyes.

    “I think it’s awesome that I can have that sound again,” added Lexi Bogan, saying it helped “boost my confidence to somewhat where it was before all this happened.”

    She now uses the app about 40 times a day and sends feedback she hopes will help future patients. One of her first experiments was to speak to the kids at the preschool where she works as a teaching assistant. She typed in “ha ha ha ha” expecting a robotic response. To her surprise, it sounded like her old laugh.

    She’s used it at Target and Marshall’s to ask where to find items. It’s helped her reconnect with her dad. And it’s made it easier for her to order fast food.

    Bogan’s doctors have started cloning the voices of other willing Rhode Island patients and hope to bring the technology to hospitals around the world. OpenAI said it is treading cautiously in expanding the use of Voice Engine, which is not yet publicly available.

    A number of smaller AI startups already sell voice-cloning services to entertainment studios or make them more widely available. Most voice-generation vendors say they prohibit impersonation or abuse, but they vary in how they enforce their terms of use.

    “We want to make sure that everyone whose voice is used in the service is consenting on an ongoing basis,” said Jeff Harris, OpenAI’s lead on the product. “We want to make sure that it’s not used in political contexts. So we’ve taken an approach of being very limited in who we’re giving the technology to.”

    Harris said OpenAI’s next step involves developing a secure “voice authentication” tool so that users can replicate only their own voice. That might be “limiting for a patient like Lexi, who had sudden loss of her speech capabilities,” he said. “So we do think that we’ll need to have high-trust relationships, especially with medical providers, to give a little bit more unfettered access to the technology.”

    Bogan has impressed her doctors with her focus on thinking about how the technology could help others with similar or more severe speech impediments.

    “Part of what she has done throughout this entire process is think about ways to tweak and change this,” Mirza said. “She’s been a great inspiration for us.”

    While for now she must fiddle with her phone to get the voice engine to talk, Bogan imagines an AI voice engine that improves upon older remedies for speech recovery — such as the robotic-sounding electrolarynx or a voice prosthesis — by melding with the human body or translating her words in real time.

    She’s less sure about what will happen as she grows older and her AI voice continues to sound like she did as a teenager. Maybe the technology could “age” her AI voice, she said.

    For now, “even though I don’t have my voice fully back, I have something that helps me find my voice again,” she said.

    ]]>
    Mon, May 13 2024 05:56:12 AM Mon, May 13 2024 05:56:12 AM
    Bumble founder Whitney Wolfe Herd says the app could embrace AI: ‘Your dating concierge could go and date for you' https://www.nbclosangeles.com/news/business/money-report/bumble-founder-whitney-wolfe-herd-says-the-app-could-embrace-ai-your-dating-concierge-could-go-and-date-for-you/3409519/ 3409519 post 9528405 Taylor Hill | Getty Images Entertainment | Getty Images https://media.nbclosangeles.com/2024/05/107131817-1665410436264-gettyimages-1235841428-Forbes_30_Under_30_2021.jpeg?quality=85&strip=all&fit=300,176 Bumble founder Whitney Wolfe Herd says the woman-focused dating app is embracing AI.

    When discussing the future of Bumble at Bloomberg Tech in San Francisco, Herd, who recently stepped down as the app’s CEO, says Bumble will use AI “to help create more healthy and equitable” dating experiences.

    “You could, in the near future, be talking to your AI dating concierge and you could share your insecurities … and then it could give you productive tips for communicating with other people,” she says.

    The idea of using AI to help you flirt isn’t new. Tools like YourMove.AI and Love Genius use AI to craft more intriguing dating bios and messages for daters.

    In the United States, 1 in 3 men ages 18 to 34 use ChatGPT for relationship advice, according to a recent survey conducted on the polling platform Pollfish. Just 14% of women in the same age range reported doing the same.

    Herd envisions Bumble taking AI technology a step further, though.

    “There is a world where your dating concierge could go and date for you with other dating concierge … and then you don’t have to talk to 600 people,” she says.

    This prediction comes at a time when singles feel burnt out from swiping. Almost half of Americans, 46%, say they have had somewhat or very negative experiences with online dating, according to 2023 data from Pew Research Center.

    This new AI offering could curb some of that dating fatigue, according to Herd: “That’s the power of AI if harnessed the right way.”


    ]]>
    Fri, May 10 2024 10:26:36 AM Sat, May 11 2024 06:27:07 AM
    With help from AI, Randy Travis got his voice back. Here's how his first song post-stroke came to be https://www.nbclosangeles.com/entertainment/entertainment-news/randy-travis-got-his-voice-back-ai/3405929/ 3405929 post 9515915 AP Photo/Mark Humphrey, File https://media.nbclosangeles.com/2024/05/AP24123716597012.jpg?quality=85&strip=all&fit=300,200 With some help from artificial intelligence, country music star Randy Travis, celebrated for his timeless hits like “Forever and Ever, Amen” and “I Told You So,” has his voice back.

    In July 2013, Travis was hospitalized with viral cardiomyopathy, a heart condition brought on by a viral infection, and later suffered a stroke. The Country Music Hall of Famer had to relearn how to walk, spell and read in the years that followed. A condition called aphasia limits his ability to speak — it’s why his wife Mary Travis assists him in interviews. It’s also why he hasn’t released new music in over a decade, until now.

    “Where That Came From,” which was released Friday, is a rich acoustic ballad amplified by Travis’ immediately recognizable, soulful vocal tone.

    Cris Lacy, Warner Music Nashville co-president, approached Randy and Mary Travis and asked: “‘What if we could take Randy’s voice and recreate it using AI?’” Mary Travis told The Associated Press over Zoom last week, Randy smiling in agreement right next to her. “Well, we were all over that, so we were so excited.”

    “All I ever wanted since the day of a stroke was to hear that voice again.”

    Lacy tapped developers in London to create a proprietary AI model to begin the process. The result was two models: one with 12 vocal stems (or song samples), and another with 42 stems collected across Travis’ career — from 1985 to 2013, says Kyle Lehning, Travis’ longtime producer. Lacy and Lehning chose to use “Where That Came From,” a song written by Scotty Emerick and John Scott Sherrill that Lehning co-produced and held on to for years. He believed it could best articulate the humanity of Travis’ idiosyncratic vocal style.

    “I never even thought about another song,” Lehning said.

    Once he input the demo vocal (sung by James Dupree) into the AI models, “it took about five minutes to analyze,” says Lehning. “I really wish somebody had been here with a camera because I was the first person to hear it. And it was stunning, to me, how good it was sort of right off the bat. It’s hard to put an equation around it, but it was probably 70, 75% what you hear now.”

    “There were certain aspects of it that were not authentic to Randy’s performance,” he said, so he began to edit and build on the recording with engineer Casey Wood, who also worked closely with Travis over a few decades.

    The pair cherry-picked from the two models and made alterations to things like vibrato speed, or slowing and relaxing phrases. “Randy is a laid-back singer,” Lehning says. “Randy, in my opinion, had an old soul quality to his voice. That’s one of the things that made him unique, but also, somehow familiar.”

    His vocal performance on “Where That Came From” had to reflect that fact.
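
    The AP story does not name the editing tools Lehning and Wood used. As a generic illustration of one edit of the kind described, slowing and relaxing a phrase without changing its pitch, an audio library such as librosa can time-stretch a clip (the file name and stretch rate are invented):

```python
# Generic illustration of one edit described above: slowing and relaxing
# a phrase without changing its pitch. Not the actual tooling used on
# the record; the file name and stretch rate are invented.
import librosa
import soundfile as sf

y, sr = librosa.load("phrase.wav", sr=None)            # hypothetical phrase
relaxed = librosa.effects.time_stretch(y, rate=0.92)   # rate < 1 slows it
sf.write("phrase_relaxed.wav", relaxed, sr)
```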

    “We were able to just improve on it,” Lehning says of the AI recording. “It was emotional, and it’s still emotional.”

    Mary Travis says the “human element,” and “the people that are involved” in this project, separate it from more nefarious uses of AI in music.

    “Randy, I remember watching him when he first heard the song after it was completed. It was beautiful because at first, he was surprised, and then he was very pensive, and he was listening and studying,” she said. “And then he put his head down and his eyes were a little watery. I think he went through every emotion there was, in those three minutes of just hearing his voice again.”

    Lacy agrees. “The beauty of this is, you know, we’re doing it with a voice that the world knows and has heard and has been comforted by,” she says.

    “But I think, just on human terms, it’s a very real need. And it’s a big loss when you lose the voice of someone that you were connected to, and the ability to have it back is a beautiful gift.”

    They also hope that this song will work to educate people on the good that AI can do — not the fraudulent activities that so frequently make headlines. “We’re hoping that maybe we can set a standard,” Mary Travis says, where credit is given where credit is due — and artists have control over their voice and work.

    Last month, over 200 artists signed an open letter submitted by the Artist Rights Alliance non-profit, calling on artificial intelligence tech companies, developers, platforms, digital music services and platforms to stop using AI “to infringe upon and devalue the rights of human artists.” Artists who co-signed included Stevie Wonder, Miranda Lambert, Billie Eilish, Nicki Minaj, Peter Frampton, Katy Perry, Smokey Robinson and J Balvin.

    So, now that “Where That Came From” is here, will there be more original Randy Travis songs in the future?

    “There may be others,” says Mary Travis. “We’ll see where this goes. This is such a foreign territory. There’s likely more on the horizon.”

    “We do have other tracks,” says Lacy, but Warner Music is being selective. “This isn’t a stunt, and it’s not a parlor trick,” she added. “It was important to have a song worthy of him.”

    ]]>
    Mon, May 06 2024 02:11:30 PM Mon, May 06 2024 02:12:30 PM
    What to know about Trump strategist's embrace of AI to help conservatives https://www.nbclosangeles.com/decision-2024/what-to-know-about-trump-strategists-embrace-of-ai-to-help-conservatives/3405813/ 3405813 post 7251037 Jonathan Ernst | Reuters https://media.nbclosangeles.com/2022/07/106719685-16012930382020-09-28t014456z_1128345174_rc2d7j958kg0_rtrmadp_0_usa-election-trump-parscale.jpeg?quality=85&strip=all&fit=300,200 Brad Parscale was the digital guru behind Donald Trump winning the 2016 election and was promoted to manage the 2020 campaign. But he didn’t last long on that job: His personal life unraveled in public and he later texted a friend that he felt “guilty” for helping Trump win after the riot at the U.S. Capitol.

    He’s since become an evangelist about the power of artificial intelligence to transform how Republicans run political campaigns. And his company is working for Trump’s 2024 bid, trying to help the presumptive Republican nominee take back the White House from Democratic President Joe Biden.

    Here’s what to know about Parscale and his new role:

    NEW AI-POWERED CAMPAIGN TOOLS

    Parscale says his company, Campaign Nucleus, can use AI to help generate customized emails, parse oceans of data to gauge voter sentiment and find persuadable voters. It can also amplify the social media posts of “anti-woke” influencers, according to an Associated Press review of Parscale’s public statements, his company documents, slide decks, marketing materials and other records not previously made public.

    Soon, Parscale says, his company will deploy an app that harnesses AI to assist campaigns in collecting absentee ballots in the same way drivers for DoorDash or Grubhub pick up dinners from restaurants and deliver them to customers.

    FROM UNKNOWN TO TRUMP CONFIDANT

    Parscale was a relatively unknown web designer in San Antonio, Texas, when he was hired to build a web presence for Trump’s family business.

    That led to a job on the future president’s 2016 campaign. He was one of its first hires and spearheaded an unorthodox digital strategy, teaming up with scandal-plagued Cambridge Analytica to help propel Trump to the White House.

    “I pretty much used Facebook to get Trump elected in 2016,” Parscale said in a 2022 podcast interview.

    Following Trump’s surprise win, Parscale’s influence grew. He was promoted to manage Trump’s reelection bid and enjoyed celebrity status. A towering figure at 6 feet, 8 inches with a Viking-style beard, Parscale was frequently spotted at campaign rallies taking selfies with Trump supporters and signing autographs.

    Parscale was replaced as campaign manager not long after a rally in Tulsa, Oklahoma, drew an unexpectedly small crowd, enraging Trump.

    ROLE IN 2024 CAMPAIGN

    Since last year, Campaign Nucleus and other Parscale-linked companies have been paid more than $2.2 million by the Trump campaign, the Republican National Committee and their related political action and fundraising committees, campaign finance records show.

    Parscale did not respond to questions from the AP about what he’s doing for the Trump campaign. Trump has called artificial intelligence “so scary” and “dangerous,” while his campaign, which has shied away from highlighting Parscale’s role, said in an emailed statement that it did not “engage or utilize” tools supplied by any AI company.

    Parscale-linked companies have been paid to host websites, send emails, provide fundraising software and digital consulting, campaign finance records show.

    The Biden campaign and Democrats are also using AI. So far, they said, they are primarily deploying the technology to help them find and motivate voters and to better identify and overcome deceptive content.

    TIES TO A WEALTHY GOP DONOR

    Last year, Parscale bought property in Midland, Texas, in the heart of the nation’s highest-producing oil and gas fields. It is also the hometown of Tim Dunn, a billionaire born-again evangelical who is among the state’s most influential political donors.

    In April of last year, Dunn invested $5 million in a company called AiAdvertising, which, under a previous corporate name, had once bought one of Parscale’s firms. The San Antonio-based ad firm also announced that Parscale was joining as a strategic adviser, to be paid $120,000 in stock and a monthly salary of $10,000.

    “Boom!” Parscale tweeted. “(AiAdvertising) finally automated the full stack of technologies used in the 2016 election that changed the world.”

    AiAdvertising added two key national figures to its board: Texas investor Thomas Hicks Jr. — former co-chair of the RNC and longtime hunting buddy of Donald Trump Jr. — and former GOP congressman Jim Renacci. In January, Dunn gave AiAdvertising an additional $2.5 million via an investment company, and AiAdvertising said in a news release that the cash infusion would help it “generate more engaging, higher-impact campaigns.”

    Dunn declined to comment, and AiAdvertising did not respond to messages seeking comment.

    PARSCALE’S VISION

    Parscale occasionally offers glimpses of the AI future he envisions. Casting himself as an outsider to the Republican establishment, he has said he sees AI as a way to undercut elite Washington consultants, whom he described as political parasites.

    In January, Parscale told a crowd assembled at a grassroots Christian event in a Pasadena, California, church that their movement needed “to have our own AI, from creative large language models and creative imagery, we need to reach our own audiences with our own distribution, our own email systems, our own texting systems, our own ability to place TV ads, and lastly we need to have our own influencers.”

    ]]>
    Mon, May 06 2024 11:45:09 AM Mon, May 06 2024 11:58:10 AM