Would you speak differently if you knew that a machine was listening to you and judging you based on what you said, whether you were using positive or negative words, or whether the sound of your voice was optimistic or pessimistic?
Apparently Wall Street executives do. They are trying to game the algorithms that listen in on earnings calls.
Remember George Carlin's "Seven Words You Can Never Say on Television"? We may now have "words you can't say on an earnings call."
A recent study found that managers are increasingly avoiding negative words and sounding more upbeat on earnings calls, so that machine algorithms are more likely to rate the call "positive" rather than "negative."
Oh man. Anything to fool the algos.
This is a new round in the war between machines and humans. Machines can fool people, but people also try to fool machines.
All of this makes sense if you understand the evolution of efforts to figure out what is "really" going on with corporate earnings.
First came earnings reports, which emerged with the formation of the Securities and Exchange Commission in the early 1930s. Then came earnings calls. Then came analysts trying to read the executives' "body language" on those calls to determine how they "really" felt about their business prospects. Then came machines that would listen to executives for key words deemed important and decide whether a call sounded "optimistic" or "pessimistic" based on the words used.
Now there's a new twist: the executives seem to have figured out that the machines are listening, and that if they avoid certain words that sound "dejected" or "negative," they can improve the score they receive, and their earnings call will magically sound more positive.
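To make the mechanism concrete, here is a minimal sketch of the kind of dictionary-based sentiment scoring the article describes, with two phrasings of the same bad news. The word lists are tiny illustrative stand-ins invented for this example, not the actual lexicons (such as the Loughran-McDonald financial word lists) that real research tools use.

```python
import re

# Illustrative stand-in word lists; real systems use far larger lexicons.
NEGATIVE = {"decline", "loss", "weak", "headwinds", "impairment"}
POSITIVE = {"growth", "strong", "record", "improve", "momentum"}

def sentiment_score(text: str) -> float:
    """Return (positive hits - negative hits) normalized by word count."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

blunt   = "We saw a decline in margins and a loss in the consumer segment."
coached = "Margins softened, and the consumer segment fell short of plan."

print(sentiment_score(blunt))    # negative: "decline" and "loss" are flagged
print(sentiment_score(coached))  # 0.0: same news, but no flagged words
```

The second sentence conveys the same information to a human reader, yet a simple word-counting algorithm scores it as neutral rather than negative, which is exactly the loophole the executives appear to be exploiting.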
That is the finding of Sean Cao, Wei Jiang, Baozhong Yang, and Alan L. Zhang, authors of "How to Talk When a Machine is Listening: Corporate Disclosures in the Age of AI," published on the website of the National Bureau of Economic Research.
Their main conclusion: "Firms with high expected machine downloads manage textual sentiment and audio emotion in ways catered to machine and AI readers, such as by differentially avoiding words that are perceived as negative by computational algorithms as compared to those by human readers, and by exhibiting speech emotion favored by machine learning software."
In other words, executives use words they think the machines want to hear, and that earns them a more positive score.
The authors found that this effect was particularly noticeable at companies whose filings attract a high level of machine downloads. In other words, the more attention is being paid, the more likely executives are to change how they talk.
Of course, the ability of machines to analyze earnings calls has been known for years, but the authors say, "Our study is the first to identify and analyze the feedback effect, i.e., how companies adjust the way they talk when they know machines are listening."
OK, so we're in a giant hall of mirrors. People (investors) try to find out what other people (corporate executives) really think about their company's prospects by having machines analyze earnings calls, while those executives change their behavior so that the machines will tell the investors that things are better than they really are, or at least as good as the executives meant them to sound.
Understood? What could go wrong?
"People take machines and use them to analyze emotional signals so we can analyze other people more efficiently," said Nicholas Colas of DataTrek. "But the machines are doing it on a scale that humans could never do. An infinite loop is being set up, and we expect it to be refined over time."
Even the study's authors are a little concerned about where this will ultimately lead us: "Such a feedback effect can lead to unexpected results like manipulation and collusion," they said.