My Customers Don't Pay Trump's Tariffs; My discussion with Porter Stansberry about free speech on big social media platforms; My daughter and I are going to the Women's World Cup

Tuesday, July 2, 2019

1) I read an interesting op-ed in yesterday's Wall Street Journal that highlights why Trump's 25% tariffs on many Chinese goods might not affect U.S. companies as much as some are predicting. I think this will prove to be the case with one of my favorite stocks, Lumber Liquidators (LL). My Customers Don't Pay Trump's Tariffs. Excerpt:

When President Trump imposed 25% import tariffs on $200 billion of Chinese goods, economists and trade pundits said the levies would pass through to U.S. consumers. As the owner of a consumer-electronics company that sources 90% of our products from China, I have a different perspective.

Most American companies won't roll over and pay 25% more for a product. They will demand lower prices from their Chinese factory sales representative...

Experts underestimate the leverage American companies wield in the trade conflict with China. We understand that, in the game of chicken between buyer and seller, the threat of pulling our orders puts significant pressure on China's manufacturers – and the Chinese government. Of course, certain industries and companies can exert more leverage than others. But if my purchasing department can win significant price concessions, it's safe to assume others can as well.
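The arithmetic behind this argument is worth making explicit: the 25% duty is ad valorem – it's assessed on the import price, not the retail price – so a concession from the factory shrinks the tariff base along with the price itself. Here's a minimal sketch using hypothetical numbers of my own (the $100 widget and the 20% concession are my illustration, not the op-ed's), showing that a 20% price cut exactly offsets a 25% tariff:

```python
# A minimal sketch with hypothetical numbers (the $100 widget and the 20%
# concession are illustrative, not from the op-ed). The duty is ad valorem:
# it's assessed on the import (invoice) price, so a supplier price cut
# shrinks the tariff base along with the price itself.

TARIFF_RATE = 0.25  # the 25% tariff rate cited in the op-ed

def landed_cost(import_price: float, tariff_rate: float = TARIFF_RATE) -> float:
    """Return the import price plus the tariff assessed on that price."""
    return import_price * (1 + tariff_rate)

# Before any concession: a $100 widget lands at $125.
print(landed_cost(100.00))  # 125.0

# After a 20% price concession from the factory ($100 -> $80), the landed
# cost falls back to $100 -- the concession fully offsets the 25% duty,
# because 1 / 1.25 = 0.80.
print(landed_cost(80.00))   # 100.0
```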

2) I had an interesting discussion with my good friend Porter Stansberry that I wanted to share with Empire Financial Daily readers. We were talking about how the social media giants should regulate (or be regulated regarding) what's posted on their platforms. It was triggered by my June 14 e-mail, in which I wrote:

I just finished reading (actually, listening to on Audible at my usual 2.75x speed) the new book, Zucked: Waking Up to the Facebook Catastrophe, which is a devastating critique of the tech giants, especially Facebook and Google (in particular, YouTube).

What makes it especially troubling is the author. One would expect plenty of criticism from the ACLU or Sen. Elizabeth Warren, but in this case, it's coming from lifelong tech industry insider, investor, and entrepreneur Roger McNamee. As an early mentor to Facebook CEO Mark Zuckerberg, he knows the man and the company better than almost anyone.

At one time, McNamee was a big supporter, but he has since pivoted 180 degrees: he's horrified by how the big social media platforms – Facebook and YouTube in particular – have been used by the companies and their advertisers to violate privacy and manipulate individuals, and abused by the Russians to influence elections, among other harms. McNamee now believes these platforms are doing more harm than good and is calling for strong government intervention.

As a citizen, I share his concerns. But as an investor, I think that the government is unlikely to do anything that will materially disrupt the business models and dominance of the tech giants, so their stocks are likely to do well.

In response, Porter wrote:

This isn't rhetorical...

I wonder why users expect Facebook to moderate content. And I wonder why elites like the New York Times and McNamee object to Facebook 'manipulating' its users. Isn't that what NYT does, too – with the content it produces?!

What's the difference between allowing Facebook to do what it wants and free speech?

If users want to read hate speech, why should Facebook stop them?

Shouldn't there be a free exchange of ideas? Even bad ideas?

I'm not asking you, per se – I'm asking how the regulators will ever parse these issues...

This stuff reminds me of how funny the reality of freedom is in practice compared to the elite view of what people are and what they do with their freedoms...

I replied:

I agree, it's a tough issue. But that doesn't mean we should throw up our hands and give up – and allow Facebook to be used for ethnic cleansing in Myanmar, to pick an extreme example.

Porter wrote back:

I've never been to Mynmar... hell, I can't even spell it... But I have a lot of sympathy for Facebook if our moral standard is that Facebook's executives should be able to tell a "good user" from a "bad user" in regards to every political conflict in the world.

Who knows what really happens in these far-flung places?

Your "ethnic cleansing terrorist" is someone else's freedom fighter.

Again, I'm not saying that merely to argue. I know we have slightly different political orientations – though not nearly as different as you might think. The core difference is that I believe most political action is very inefficient, because it requires coercion instead of relying on persuasion. And, of course, there are a lot of ways the government uses persuasion, too.

I'm poking you about this issue because I don't think you appreciate how philosophical and imponderable these questions about content will become. My kids are only 11 and 8, and already I can't regulate their communication. I don't understand why one takes such offense at something the other has said, because I don't speak their language. Yes, it's English... but I don't know what it means.

I do know, at the very root of human action and morality, that there's only one purely objective standard: the initiation of force. Violence = wrong. Likewise, fraud is violence through other means. In short, taking what doesn't belong to you through force or fraud is wrong. It's hard to go beyond that standard without invoking a cause or an idea that you believe is worthy of violence. And whether it is worthy or not ultimately depends on where you sit. An Israeli feels differently about violence in pursuit of a West Bank settlement than a Palestinian does. Who is right? Who can say? What is clear is that violence is wrong.

So, what's right and wrong on Facebook? Hard to say.

Fraud, clearly wrong. Someone posting something about you (or me) that's objectively false and meant to harm – that's violence, and it's wrong.

But what if I say something abhorrent – though maybe not wrong, and maybe not meant to harm any individual? What if I say something that's not even a fact – just an opinion? Like, "I bet Donald Trump has raped someone." Or "I don't think women should be scientists."

What if a whole bunch of Facebook users are saying things like this? Maybe things that are very upsetting to you or me... or even most Americans? What's the standard?

Seems to me that we have laws that govern such speech already, don't we? Why don't the offended parties simply engage the legal system we have now? Yelling "Fire!" in a crowded theater isn't allowed. But saying "we should burn down the statehouse" is allowed.

And... you know what... using Facebook is entirely optional. It's not like the sidewalk that we all share as part of the "commons." It's entirely private property. No one has to use it, ever, for anything. Not 911. Not the emergency broadcast system. Connecting to Facebook is an entirely private affair, too – not public airwaves, or highways, or even rights-of-way.

Given that unique circumstance, here's a crazy idea... a truly crazy idea...

Why not simply decree that words – that speech – ought not be regulated, because the right to your own words and thoughts, and the freedom to express them, is a human right, granted by God, not by man.

In fact, this principle is so important, maybe we should enshrine it in some way, make it a kind of "foundation" of our civil society – wouldn't that be neat? Just tell the government, yes, you have a monopoly on force. Yes, you have the power to tax... and to jail... and even to kill. But you don't have the right to abridge pure speech.

We could make it like the first law or something...

How would it go? Hmm...

"Congress shall make no law abridging the freedom of speech."

Just something clear like that, no wiggle room.

How do we regulate Facebook? Why not use common sense? Don't like Facebook? Don't use it.

Regards,

Porter

P.S. I hope you won't ever take my relentless libertarianism personally. I hold you in the highest regard. And I know you can have different views that are thoughtful and kind. They may even be right – in which case, I promise to change my mind.

And I replied:

Of course I don't take it personally. You make great points and I look forward to sharing this discussion with my readers.

I just don't think your utopian ideal of unfettered freedom of speech is realistic – or even consistent with your own words. You wrote that yelling "fire!" in a crowded theater is clearly wrong and illegal, and that "someone posting something about you (or me) that's objectively false and meant to harm – that's violence and it's wrong."

But that's exactly what happened in Sri Lanka, where extremists posted "false rumors [that] set Buddhist against Muslim," leading to "a Buddhist mob [setting] fire to Muslim-owned shops and homes... burning a man to death." A New York Times article concluded:

A reconstruction of Sri Lanka's descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook's newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.

It was much worse in Myanmar, where Facebook has admitted that its platform was used to "foment division and incite offline violence." As this New York Times article reports:

Members of the Myanmar military were the prime operatives behind a systematic campaign on Facebook that stretched back half a decade and that targeted the country's mostly Muslim Rohingya minority group, the people said. The military exploited Facebook's wide reach in Myanmar, where it is so broadly used that many of the country's 18 million internet users confuse the Silicon Valley social media platform with the internet. Human rights groups blame the anti-Rohingya propaganda for inciting murders, rapes and the largest forced human migration in recent history.

While Facebook took down the official accounts of senior Myanmar military leaders in August, the breadth and details of the propaganda campaign – which was hidden behind fake names and sham accounts – went undetected. The campaign, described by five people who asked for anonymity because they feared for their safety, included hundreds of military personnel who created troll accounts and news and celebrity pages on Facebook and then flooded them with incendiary comments and posts timed for peak viewership.

How are these not examples of both the violence and fraud that you decry?

And what about here at home? Just today, the Wall Street Journal reports: Facebook, YouTube Overrun With Bogus Cancer-Treatment Claims. This could literally cost people their lives.

And how is it not fraud (and, I would argue, violence against our democracy) when a hostile foreign power, Russia, sets up countless fake accounts and groups on Facebook, Twitter, Instagram, YouTube, and elsewhere to feed Americans a never-ending stream of false information and inflammatory messages to tilt our election in favor of its preferred candidate?

The answer to this abuse of social media for the purposes of fraud and violence isn't to tell people to stop using social media. That's not going to happen. Rather, in the immortal words of Ross Perot, "If you see a snake, just kill it!"

Porter responded:

What do you think is more dangerous... a few extremists using Facebook to foment trouble in audiences that hate each other... or Facebook getting to decide what speech is allowed and what isn't?

Did you see any of the Russian Facebook ads? They were laughable.

They don't seem like a threat to our democracy any more than Saddam Hussein was a threat to our safety.

Saying the Russians are going to destroy us with Facebook so we need Facebook to regulate speech seems like the worst possible reaction I can imagine.

How about just reminding people that most of what you see, hear, and read on Facebook probably isn't true?

Isn't that good enough for most adults?

On the other hand, if you let Facebook decide what speech is allowed and what isn't, now you've got a much, much bigger problem. It's awfully hard to manage a debate that's not allowed to occur. Facebook will simply say the speech it doesn't like... or that the government doesn't like... isn't allowed.

And so, what do you think will happen the next time there's a severe financial bubble?

I remember when I started writing about GM's obvious financial problems in 2006 and 2007, explaining why it was inevitable that the company would go bankrupt. To refresh your memory: the company had been operating at a loss for 20 years and had been borrowing billions to repay lenders and to contribute to a failing pension system. It was a zombie business – it couldn't possibly make a profit because of its existing debt obligations and it continued to lose market share and margins every quarter. GM was clearly and obviously poised to collapse.

How did my readers respond?

Many of them were angry. Dozens of subscribers accused me of causing GM to fail because I was warning investors about GM's problems.

Many of them accused me of exaggerating the company's problems so I could make money shorting the stock. (I don't trade stocks I write about.)

One of these angry subscribers mailed me a death threat – to my home address. A very detailed and specific death threat.

Meanwhile, Facebook has already banned virtually all of our advertising.

Facebook won't allow any commercial speech that it says plays upon investors' fears – like our 'American 2020' advertisement that warns about the rise of socialism in America and the dire possible consequences for our economy and our currency.

Don't you think people should have access to this point of view?

What's the next step?

Well, when Facebook is allowed to regulate all speech, who knows what they will decide to let me say. Will Facebook decide that my editorial writing, like the financial research I published on GM, Fannie, Freddie or Lehman Brothers... is "fomenting a crisis" and not allow it?

Again, I'd rather take the risk of reading stuff I find offensive than give Facebook the power to make those decisions for me.

My final thoughts:

I agree that it's tough to determine what's prohibited violence and fraud versus protected free speech, however unpopular. And I'm 100% in agreement with you that Facebook – which, in reality, is just one person (Mark Zuckerberg) – shouldn't have so much power to regulate the speech of 2.4 billion people.

While libertarians won't want to hear it, this is why we have a government, elected by the people, to set up and enforce a system that balances all of the competing interests. It's been doing so since the founding of our country in countless offline contexts – the white supremacist rally in Charlottesville, Virginia... the Pentagon Papers... and the Nazis marching in Skokie, Illinois, to name a few. Now, the government needs to apply the same framework to the online world that increasingly dominates our lives, thoughts, and actions.

Germany, which passed a tough new online hate-speech law that went into effect at the beginning of 2018, provides an interesting model – and shows how difficult it will be: Germany Acts to Tame Facebook, Learning From Its Own History of Hate.

It won't be easy, but we mustn't let perfection be the enemy of the good. The alternative – what we have now, with extremists, charlatans, and foreign agents committing fraud, undermining our democracy, and fomenting hatred and even violence, not in a small way but to a massive degree – is simply intolerable.

3) I'm cheering for the U.S. women's soccer team today. Win or lose, my soccer-mad, 20-year-old middle daughter and I will see them play this weekend in France. We fly to London on Friday morning and on Saturday will either fly to Lyon for the final or Nice for the consolation game. If you're going to be there (or even better, have two extra tickets!), let me know. Send me an e-mail at [email protected].

Best regards,

Whitney

Whitney Tilson


About Whitney Tilson

Prior to creating Empire Financial Research, Whitney Tilson founded and ran Kase Capital Management, which managed three value-oriented hedge funds and two mutual funds. Starting out of his bedroom with only $1 million, Tilson grew assets under management to nearly $200 million.

Tilson graduated magna cum laude from Harvard College with a bachelor's degree in government in 1989. After college, he helped Wendy Kopp launch Teach for America and then spent two years as a consultant at the Boston Consulting Group. He earned his MBA from Harvard Business School in 1994, where he graduated in the top 5% of his class and was named a Baker Scholar.
