‘Coming Together’: Easton students debut photographs at West Ward Market days after fire

EASTON, Pa. — Angel Sanchez and his friends smiled proudly Wednesday as they pointed at a large photo hung on a chain link fence.

The four buddies are among 12 middle school students who participated in an after-school photography class at Easton Area Community Center.

The young students attended the unveiling of their project, a photo exhibit called “Inside Outside World,” at the West Ward Market at 12th and Northampton streets.

  • The Inside Outside World project at Easton’s West Ward Market features hundreds of photos taken by young photographers
  • The photos will be on display through September
  • Shoppers at the West Ward Market donated to those displaced by Monday’s fire on Ferry Street

Joining them at the unveiling were Easton Mayor Sal Panto Jr. and Lisa Campbell, director of the Easton Area Community Center.
The photo exhibit will be on display every Wednesday until September.

During the three-week class, the Easton-area students were taught about the history of portrait photography to inspire their own portraits and self portraits while learning techniques of composition, staging, lighting and color.

Students pose with Polaroids they took in a photography class at the Easton Area Community Center. The students’ photographs are on display at the West Ward Market. (Micaela Hood / LehighValleyNews.com)

“I learned a lot like how to use cameras the old way,” Angel, 11, said. “It was a lot of fun. I enjoyed participating in the class.”

The workshop was taught by Ghen Dennis, founder of Overtown Media + Arts in Easton.

Dennis, who handed photography books to the youngsters as a gift at the event, volunteered her time and shared some of the printing costs for the project.

She said her goal was to show the students what it was like to take photos in pre-digital times.

Dozens of student-made photographs are on display at the West Ward Market on Wednesdays through September. (Micaela Hood / LehighValleyNews.com)

“They experimented with black-and-white Polaroid film and some of them had the idea to include the Polaroids in their [finished] shot,” she said.

“For a lot of young people, photographs aren’t material things. They get traded digitally. Thinking about having a tactile object, I think surprised them.”

The project also was funded by the West Ward Community Initiative.

‘All in this together’

Residents of Easton drop off donations for victims of the Ferry Street fire on Wednesday, May 31, at the West Ward Market. (Micaela Hood / LehighValleyNews.com)

The unveiling of the students’ banners brought some happiness to the neighborhood, between Sixth and 15th streets, which is rallying together after a fire Monday destroyed a dozen homes on the 900 block of Ferry Street and displaced 61 residents.

On Wednesday afternoon, residents were seen dropping off bags of non-perishable food and gently used clothing. One unidentified donor gave $450 in gift cards.

“It was all increments of $50, so that will help six families,” said Tanya Ruiz, manager of the West Ward Community Initiative, who lives a block away from the fire site.

As she sorted through donations Wednesday, Ruiz said she watched as her neighborhood was engulfed in flames on Memorial Day.

”It is so sad that this happened, but there’s a lot of people helping out,” she said. “People are coming together and giving back.”

The clothing collected Wednesday will be distributed at an event to be held in the next few weeks at Paxinosa Elementary School.

Charitable organizations such as the American Red Cross Pennsylvania Rivers Chapter, the Third Street Alliance, West Ward WISE and local church groups also are assisting those in need.

Residents dropped off donations for families affected by the Ferry Street fire on Wednesday in Easton. (Micaela Hood / LehighValleyNews.com)

A ‘berry’ good day

Attendees at the exhibit listened to tunes by Scott Harrington, a local singer-guitarist, while celebrating “Strawberry Day” at the West Ward Market.

The market features dozens of locally sourced vendors selling fruit, vegetables, flowers, fresh eggs, baked breads, cookies and pastries, jams, honey and condiments.

It was founded in 2022 by the GEDP as part of a collaboration to expand access to healthy food for residents throughout the Valley.

Shoppers can pay for goods with cash, credit or debit cards and EBT.

It is open 3 p.m. to 7 p.m. on Wednesdays through September.

16 photographers try to capture love’s complexity in New York exposition – La Prensa Latina Media


By Jorge Fuentelsaz

New York, May 31 (EFE).- The intoxicating passion of a new relationship, betrayal, narcissistic love, violence, intimacy, politics, sex and devotion through illness and beyond death are some of the themes captured in photographs by 16 international artists that will be on display beginning Friday at New York’s International Center of Photography Museum (ICP).

“Love Songs, Photography and Intimacy” is the name of this exhibition conceived of as an old “cassette” on which songs were compiled as a present for a friend, relative or lover, exhibition curator Sara Raza told EFE.

She said she envisioned the exhibit as a way to rethink different kinds of relationships, and what that really means, thereby moving beyond the romantic version of how we think about love and intimacy.

Raza explained that, in putting together the exhibition, she was also interested in including and exploring different ways of recording, whether it be global, local, digital or analog.

Among the works on display, Raza emphasized that of US photographer Clifford Prince King, one of whose photos is on the cover of the exhibition’s catalog showing two young men seated on a grassy hill in the country and embracing.

It’s a photograph full of color that plays with the public, with LGBT love, with privacy, what is noticed and what is hidden, the urban world and rural spaces.

Also standing out among the photos in the exhibit are shots by Japan’s Nobuyoshi Araki, a series that dates back to 1971 and is made up of photos of his honeymoon with his wife Yoko Aoki. On the opposite wall is another of Araki’s photo series from 1989-1990 documenting Aoki’s illness and death.

The heartbreaking and violent love of US photographer Nan Goldin has also earned a place in the exhibit, which was put on display for the first time at the Maison Européenne de la Photographie (MEP) in Paris.

For this show, however, Raza has added five new artists to give it a more American accent and include a new layer – namely one “beyond romantic love” that is more political and more international.

The sad, naked look of Angel Zinovieff, the partner of photographer Collier Schorr, is also included in the extensive catalog, where works dating back to the 1950s are combined with more recent photos.

That’s the case with Franco-Dominican photographer Karla Hiraldo Voleau, who in her 2022 work “Another Love Story” presents a hybrid viewpoint between photography and narrative, where her own real-life experience is mingled with fiction.

In several photo panels, Hiraldo presents photos of her year-long romantic relationship with a former partner: trips, visits to the beach and taking baths together, chats at home and in bars, intimate moments in their room. These images all depict a happy relationship full of love, as recorded on Instagram.

And yet, between each group of Hiraldo’s photos a telephone conversation is interspersed in which the artist and another girl simultaneously discover that they are sharing the same lover without knowing it.

It’s a very tragic love story, Hiraldo told EFE, about how she found out that her ex-partner simultaneously had a life with another woman.

Most are real photos, though the face of her “ex” is never seen; interspersed are staged snapshots that re-create real moments, in which the person shown is an actor paid by Hiraldo and not, in fact, her former lover, with whom she shared an apartment for a year.

“It’s an auto-fiction,” she said, adding that she loves putting herself in between reality and fiction, erasing the border between what’s real and what is not, because, especially, in this project, “I didn’t know (the difference) either.”

Hiraldo’s work, and that of the other photographers, seeks to capture the complexities, elusiveness and subjectivity of love – Raza said – along with its inability to be quantified, its chemistry and incompleteness, the absence of some of its frequencies, just like the songs that are not included on those cassette tapes we made for others.

EFE

jfu/fjo/jrh/bp

Why now is the golden age of marketing measurement

What can the blend of science and art in marketing mix modelling offer the wider industry? By Jamie Parks-Taylor.

Measuring the effectiveness of marketing spend and the best allocation of media investment regularly ranks as one of marketers’ top objectives – not least when budgets are under ever more scrutiny. Marketing mix modelling (MMM) has become the go-to method of measuring the individual, and combined, impact of multiple marketing channels on business objectives – and intriguing developments in MMM are making it more accessible and powerful than ever before.

For much of the past 20 years industry attention has been focused on user-level digital attribution – driven by the emerging dominance of digital advertising and the comfort and interpretability that this deterministic form of measurement offers. There were limitations, namely that it only worked with online channels, allowed duplicate conversions across digital channels and had a propensity to over-attribute conversions to digital channels, especially those lower down the purchase funnel like retargeting display and paid search.

But the positives were seen to outweigh the negatives, leading to a near hegemony for digital attribution in the 2010s. But in recent years it has been undermined by the deterioration of the data signals that it relies upon. The introduction of Apple’s app tracking transparency (ATT) in 2021, alongside other initiatives aimed at giving people more control over their personal data, have resulted in digital campaigns achieving fewer conversions, and marketers increasingly becoming uncertain about the effectiveness of digital channels.

This uncertainty has led to a renaissance in alternative measurements – in particular, econometrics (see Figure 1). Renewed interest in MMM has also been aided by exciting recent developments, which have significantly lowered the barrier to entry. Today, anyone – or any team – with a decent understanding of the R or Python programming languages and a good grasp of marketing effectiveness theory can run an end-to-end MMM project.
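To make that concrete, here is a minimal, illustrative sketch of the core mechanics behind most MMM libraries: media spend is transformed with an adstock (carry-over) function and a saturation (diminishing-returns) function, then regressed against sales. All data, channel names and parameter values below are made up for demonstration; real projects use far richer models than this ordinary-least-squares toy.

```python
# Toy marketing-mix-model sketch (illustrative only): geometric adstock +
# diminishing-returns saturation, fitted with ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
tv_spend = rng.uniform(0, 100, weeks)     # weekly TV spend (made-up units)
search_spend = rng.uniform(0, 50, weeks)  # weekly paid-search spend

def adstock(x, decay):
    """Carry a fraction of each week's media effect over into later weeks."""
    out = np.zeros_like(x)
    for t in range(len(x)):
        out[t] = x[t] + (decay * out[t - 1] if t > 0 else 0.0)
    return out

def saturate(x):
    """Log transform to model diminishing returns to extra spend."""
    return np.log1p(x)

X = np.column_stack([
    np.ones(weeks),                       # baseline (non-media) sales
    saturate(adstock(tv_spend, 0.6)),     # transformed TV variable
    saturate(adstock(search_spend, 0.2)), # transformed search variable
])
true_coef = np.array([200.0, 30.0, 15.0])
sales = X @ true_coef + rng.normal(0, 5, weeks)  # simulated outcome

coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef)  # recovered [baseline, TV effect, search effect]
```

On simulated data like this the regression recovers the channel effects closely; the hard part in practice is choosing the decay and saturation parameters, which is where the libraries' optimisation routines (and the analyst's judgment) come in.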

Figure 1: Google Search Trends

The recent release of several open-source (free to use) libraries, including those developed by tech giants like Meta, Google and Uber, has revolutionised MMM. The algorithms underpinning these solutions sit at the cutting edge of data science and machine learning, helping deliver reliable models with a high predictive accuracy.

MMM isn’t perfect; detractors label it more of an art than a science because the process requires human involvement. MMMs often produce multiple candidate models, and analysts must evaluate them through the lens of past experience to select the ‘best’ model for a given project. But the same criticisms can be levelled at all modelling, from the models governments use to plan pandemic responses to those employed in the finance industry. This blend of science and art is invaluable in the explanation of complex systems underpinning critical decision making.

In comparison, digital attribution is less open to interpretation and more directional – once settings are selected (for example, the attribution window) the metrics are simply provided and often treated thereafter as fact. Both digital attribution and MMM can suffer from a lack of causality. MMMs can uncover relationships between marketing deployment and incremental uplifts in sales, but they don’t automatically prove a cause-and-effect relationship between media spend and sales. Causality tends to be stated more explicitly within MMM than within digital attribution, and validating the models via experiments can do a good job of resolving this dilemma.

The holistic approach of MMM generally makes it more insightful, actionable and impactful than digital attribution, especially for advertisers investing in both offline and online channels. It remains the only marketing measurement tool that allows advertisers to compare all channels like-for-like on business-critical metrics like return on investment.

MMM is now evolving into unified marketing measurement (UMM). The biggest providers and proponents of MMM, including Kantar and Google, are introducing UMM in 2023. UMM does not refer to one specific framework – each firm has its own definition – but the one unifier is that MMM lies at the heart of any UMM project. It’s the synthesis of MMM with a combination of other measurement solutions like digital attribution, experimental results and brand equity studies that makes UMM unified. The glue that holds these disparate measurement solutions together is typically Bayesian inference – a statistical model itself. In effect, UMM is an ‘ensemble’ approach – a collection of sub-models interacting within a grand model.

While UMM’s development is fascinating, it’s an incredibly nascent technology. There is no one best approach, and UMM projects are often expensive to deliver. Any advertiser looking to improve campaign impact measurement should begin with MMM and build from there. And with more paths to MMM than ever before, it’s a fantastic time to start.

Jamie Parks-Taylor is director of insight and analytics at Cream

Photography show opens at JCC gallery

The work of local photographer Diane Beatty opens today in the Thomases Family Endowment of the Youngstown Area Jewish Federation Art Gallery.

“Color vs. Black and White” will feature a few of her favorite photos, some in color, some in black and white and a few featuring both.

Beatty studied art at Youngstown State University in the 1980s, but her journey into photography didn’t begin until April 2014, when she felt compelled to photograph the demolition of General Electric on Hughes Street in Youngstown. That same year, she created a Facebook group called Youngstown Photography Group where she met other photographers that helped her learn and grow as an artist.

Since then, she has won many awards in local art shows and has been featured in the Regional Photography exhibit at the Butler Institute of American Art.

According to Beatty, “You may wonder why I prefer to show a photo in black and white versus color. There are a couple factors. If the image is strong, sometimes color can be distracting. My eye is drawn to red, for example, so if it takes away from the composition, I will see how the photo looks in black and white. The other factor is if the sky is blown out. I don’t like to shoot on a sunny day, because of this. When I first started to exhibit my work, one of my YSU professors asked to see my work. After seeing it he asked, ‘Where is the color?’ The photos were all black and white. Since then, no matter where I go to photograph, I ask myself that question and specifically look for a colorful image.”

Beatty primarily is interested in historical and abandoned sites, relishing the challenge of capturing images in low light and composing unique photographs that tell a story.

Her work will be on display through July 30 at the gallery, located inside the Jewish Community Center of Youngstown, 505 Gypsy Lane. A reception with the artist is scheduled from 1 to 3 p.m. Sunday.

Analysis | AI’s Age of Mimicry Will Make Human Mimes Cry


“Shut the doors, please.” The director of a video production studio in London is wearing headphones and holding a clipboard as he prepares for our first take. He looks me up and down.

“The avatar is going to be wearing whatever you’re wearing,” he says. “You’re okay to have it like this?”

“Yep,” I nod, eager and ready.

He takes another look. “If you can take off the watch that’d be nice.”

Oh. I set it on the table.

“Thank you.”

As if we’re in Hollywood, an engineer in jeans snaps a clapper board. Behind me is a green screen. Above me are large lights radiating heat. In front is a camera with a teleprompter, the sort used by TV news anchors, and it’s showing some of the strangest dialogue I’ve ever seen.

I read the single sentence in front of me: “All the boys ate a fish.” 

At the director’s instructions, I roll my eyes from top to bottom and left to right, before reading a one-minute script with cringey sentences like, “You can see the positivity shine through as I’m friendly and warm as I speak.” I’m asked to read it several times.

The goal is to train an AI model to create a digital clone of myself. The “fish” statement covers all possible mouth movements, and the rest is just enough to train the model on a cheerful version of my face, body and gestures. The company creating my avatar, Synthesia, has made similar versions for more than 15,000 companies including McDonald’s Corporation, Accenture Plc and Amazon Inc. Want to make a marketing video? There’s a buffet of more than 150 avatars to choose from that speak in more than 120 languages, all based on real humans. There’s no need to rent studio space, cameras or lighting — just type a script and your avatar will say it. One manufacturer says they’ve saved 70% in video production costs with the method.  

Deepfakes used to be a scourge of the internet. Now they’re a legitimate tool for getting a human on video more cheaply and at scale. Instead of hiring an actor to present a corporate training video and paying for their travel and time, a company can use an avatar for a fraction of the price. You can expect to see more digital clones like these in the coming years, of celebrities in TV ads or of ageing bands like ABBA in whizzy new concerts. The proprietor of one virtual concert company tells me he’s been solicited by the families of several dead singers, eager to cash in on their abiding fame with clones that could revive them on the stage. Speculating on his avatar’s ability to act in the afterlife, movie megastar Tom Hanks recently half-joked, “If I wanted to, I could get together and pitch a series of seven movies that would star me in them in which I would be 32 years old from now until kingdom come.”

Synthesia says its videos are technically not deepfakes since they are generated from scratch, and deepfakes manipulate a pre-existing video of someone. But the spread of these avatars is surely one of the most head-spinning impacts of the rapid advances in so-called generative AI, which can now conjure artwork, mimic voices in music, clone entire faces and bodies, generate pop songs, screenplays, short stories and news articles. It hammers a virtual nail in the coffin of the creative process we know today, accelerating a transformation that is seeing humans outsource the work of their imagination and even their own likenesses.

Like those before it, this technological revolution will come at a price. The introduction of the printing press in the 16th and 17th centuries allowed us to spread ideas and literacy, but it also lost us the oral tradition of storytelling. In our own century, social media connected us but also inadvertently made us more disconnected. In the case of generative AI, there’s a chance it will gradually erode our creative skills and lead to soulless, increasingly derivative content on our computer screens, phones and TVs, a cribbed amalgamation of the content on which AI models have been trained. Art and writing will increasingly become “content.” We’ll see a lot more of what the recently deceased novelist Martin Amis decried as “herd writing,” clichéd phrases like “the heat was stifling.” What’s even more likely is that we’ll lose the experience of connecting with a human artist through their work.

Notwithstanding those potential consequences, a new generation of creators is understandably keen to exploit generative AI’s commercial potential and jumping into the fray. Consider Sydney Faith, an author who self-publishes young adult fiction on Amazon. In January she used ChatGPT to write Legends of the Shadow Woods, a collection of short stories based on Greek mythology.

She left most of the writing in the hands of the AI, asking it at the start, “What kind of fantasy world should I write about?” When the software gave several suggestions, she picked one, then got it to generate chapter outlines, and then to generate three paragraphs at a time, which she then copied and pasted into a document. It was a repetitive process, she says, but eventually she had a rough draft of each short story. The book, which discloses that it was written by ChatGPT, now has a four-star review on Amazon. “Passable,” says one reviewer, who says they were expecting worse. Faith says she’s sold a few dozen copies so far.

Those pitching generative AI to the creative classes argue that it makes getting content down on the page so much easier. Natalie Monbiot, who runs Hour One, a company that makes avatars like mine, says these new AI tools can solve “blank page syndrome.” She refers me to one of her clients, an entrepreneur named Ian Beacraft, who has been producing videos where an avatar of himself reads out technology news. He’s been using ChatGPT to write his scripts, making it quicker and easier to produce his videos. The blank page used to be a sticking point, Monbiot says, before invoking Silicon Valley lingo: These new tools are “removing friction” from the whole process.

Devin Finley is another artist who’s been using generative AI to replace himself on the screen. A New York-based actor, Finley has put his baritone voice to characters in video games and audio plays for about 10 years, but in the fall of 2022, an industry friend told him that Hour One was looking for actors who would “go virtual” by creating avatars of themselves, digital clones that could be automated to work on their behalf. The gig could be lucrative, the friend said.

Finley welcomed the chance to make some passive income and created a clone of himself, becoming one of the first of the more than 150 actors to have a deepfake made by Hour One. Many of them weren’t professional actors, but students, waiters, paralegals and a range of gig workers, according to Monbiot. Finley, who now has “multilingual synthetic actor” on his resume, earned an upfront fee for his time, and a few months after creating his avatar, it starred in the marketing videos of a media company, Monbiot said, declining to reveal the client’s name.

If these few examples point to a broader adoption of artificial “artistic” content, a mimicry of the real thing, then we can’t blame machines for everything. Humans have been laying the groundwork for this shift over the past decade or so, with Hollywood churning out sequels and re-imagined classics, formulaic TV shows like NCIS, and a Netflix diet of addictive television designed to keep you watching the next episode. News sites have become aggregators of other articles. Much of the content we see on screens today, in other words, is already a rehash of someone else’s work.

For now, most AI-generated video content is being pioneered on YouTube, but down the line, a television producer could use generative AI tools like ChatGPT to create a rough draft of a script that then gets polished by humans. They could, for instance, take a human-written rough draft and use ChatGPT to repurpose it in the style of Nora Ephron, Aaron Sorkin or other storied screenwriters. That prospect has given Hollywood executives unexpected leverage against screenwriters who are currently striking over financial security. One of the demands of the Writers Guild of America has been for studios to ban the use of AI to write scripts, shifting a job with a steady salary to a form of gig work. So far, the studios have rejected those requests.

What that likely means is that our TV consumption will have fewer human stories, and more AI-sourced derivative content that helps studios and streaming companies protect their bottom lines. Some of the most notable recent films have come from the personal experiences of their creators, from Steven Spielberg’s childhood in The Fabelmans, to Daniel Kwan’s Asian-American upbringing in Everything Everywhere All at Once. Perhaps movies like this, borne out of real-world, personal experience will narrow in reach to niche audiences, while the general public will lap up artificially designed concoctions of previous films that sold lots of tickets.         

In that sense, we’ll lose more of the ingredients that make artwork great. The story behind how a movie or piece of art is created is critical to how we end up judging it as good or bad, says Agustín Fuentes, a Princeton University anthropologist and author of The Creative Spark: How Imagination Made Humans Exceptional.  A work of art has value because of the process of creating it — the care and thought and even the errors contribute to its beauty.  “Think of the Mona Lisa,” says Fuentes. “I’m not impressed by it, but it is hugely important in the history of art, and seeing it matters because of the story behind its making and its historical context. None of that is replicable by AI. An AI model can make a perfect image copy of the Mona Lisa, but it cannot produce the Mona Lisa.”

In March, the New York-based poet and novelist Joseph Fasano tweeted a letter from a schoolteacher, asking if he’d take part in an experiment: “Would you be willing to come into our classroom and go head to head with ChatGPT: human poet versus AI poet?” Fasano and the chatbot would each have five minutes to come up with a poem for three different topics. The problem with this is that ChatGPT can generate poems in seconds because it’s been trained on poems that humans have spent countless hours on. What it spits out isn’t poetry; it’s content. 

Rick Rubin, the famed record producer who worked with music artists from Johnny Cash to Run-D.M.C., was recently asked in a podcast hosted by Bloomberg Opinion columnist Tyler Cowen about generative AI’s impact on art. His response was that for now, art from generative AI was mostly “decorative,” because it lacked humanity. “It’s the soul in it that makes it good,” he said.

What do soul, humanity and historical context mean anyway? It is hard even for the experts to define creativity, just as it is hard to put into words exactly what we will lose when AI-made content takes up more of what we read and see. But that loss will probably have something to do with the invisible force of human connection to which people like Rubin allude, and the act of feeling “moved” by created work. Twenty years ago, when I got my first reporting job at a local radio station, I learned an open secret among news readers: The best way to get people to perk up and listen to a news bulletin was not for the news reader to deepen their voice or to copy the mannerisms of other news readers, but simply to pay careful attention to the meaning of each word as they said it. The difference between reading with meaning and reading mindlessly was technically indistinguishable and difficult to articulate. But it worked. To this day, I only have to pay close attention to the words in a book I’m reading to my kids, and they’re enraptured.

Does that allude to the soul that Rubin described? It’s hard to know. But machines are not sentient, and they have no relatable struggle or backstory to move us when we encounter their created work. As AI researchers make generative AI models more sophisticated, with billions more parameters and datasets to draw from, art created by AI will probably also appear more inventive. That will only reinforce how little we know about what creativity is, and even erode our sense of exceptionalism among animals and machines. One of the reasons Sam Altman came to believe in the possibility of artificial super intelligence before co-founding OpenAI was his realization that if human intelligence could be simulated, humans weren’t all that unique to begin with.

Little wonder that AI scientists wrangle over whether AI is, or ever will be, creative. Melanie Mitchell, a computer science professor at Portland State University, says in her 2020 book, Artificial Intelligence: A Guide for Thinking Humans, that computers can be creative but they’re not quite there yet. When I emailed her in May 2023 to ask if she still believed that, she said “yes.”

Demis Hassabis disagrees. The British scientist who leads Google’s AI efforts was claiming eight years ago that AlphaGo, the Go-playing AI program developed by his unit DeepMind, was displaying remarkable signs of creativity. In its 2016 match against Lee Sedol, the program made a highly unconventional play known as “move 37,” surprising both Lee and the game’s commentators. The move, in which AlphaGo sacrificed a group of stones in the corner of the board to gain a positional advantage elsewhere, was so unexpected that experts thought it was a mistake, and Lee took an unprecedented 15 minutes to consider his response.

AlphaGo went on to win the game decisively. Hassabis and many other AI scientists put that down to the system’s creative prowess, posing a tantalizing possibility, that software had managed to create something totally unique from seemingly nothing.

It’s also possible that believing that makes us suckers. For Hassabis, Altman and other entrepreneurs who want us to buy into a vision of unfathomably smart AI systems, selling a story of software whose abilities are as unpredictable and mysterious as humans makes good business sense. In April 2023 for instance, Google CEO Sundar Pichai appeared on an episode of 60 Minutes to talk about Google’s ChatGPT competitor, known as Bard, where he mentioned a phenomenon he called “emergent properties.”

“Some AI systems are teaching themselves skills that they weren’t expected to have,” Pichai said on the program, explaining that one of the company’s AI models was able to translate Bengali even though it had only ever seen a few words of Bengali. Yet the company’s own research paper showed its leading AI model had been trained on Bengali. The system wasn’t being creative or intuitive. Its creators were exaggerating its capabilities.   

I am not downplaying the awe-inspiring potential of these machines. I’m also keen to avoid grasping for vague justifications for our exceptionalism as humans. With the exception of those who can’t wait for the Singularity to happen, we all want to distinguish ourselves from AI systems that seem to be rapidly on course to surpass us. But as more people use generative AI to write books and screenplays or conjure videos that could go viral on TikTok — a phenomenon that is undoubtedly coming thanks simply to the economics — it’s hard to deny that our creative skillsets will grow flabbier, and another avenue for human connection will narrow. The mysterious AI models with their “emergent properties” will seep further into our creative fields, working their magic while reinforcing the power and influence of the technology companies that created them.           

For now, my digital doppelganger is still sitting on some Synthesia server somewhere, waiting to be dusted off and used for a presentation, a TikTok video or a message to someone. As strange as it may sound, some staff members at the consultancy EY, previously known as Ernst & Young, have started sending talking avatars of themselves to clients instead of emails. I have yet to find a good reason to use it other than to promote this story, though. 

The actor Devin Finley, meanwhile, seems to have settled into a new career pattern, where his digital clone can earn a little extra money on the side. He recalls that when he was first asked about creating an avatar of himself, he hesitated. “Originally I thought this might be something that could take away from who I am,” he tells me. “Then I realized I am a unique, living being.”

In a future world of artificial content, that might well become a novelty.

More From Bloomberg Opinion:

• Don’t Go Down That AI Longtermism Rabbit Hole: Parmy Olson

• Will Chatbots Replace Money Managers?: Timothy L. O’Brien

• AI Experts Aren’t Always Right About AI: Tyler Cowen

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”

More stories like this are available on bloomberg.com/opinion

©2023 Bloomberg L.P.