Technology Archives - Reason Foundation

Examining day-to-day crypto volatility and why it’s important Wed, 08 Mar 2023 15:00:00 +0000 Bitcoin, Ethereum, and other cryptocurrencies frequently exhibit daily price drops during bull markets and increases during bear markets far in excess of traditional assets.

The post Examining day-to-day crypto volatility and why it’s important appeared first on Reason Foundation.

Few asset classes have been more volatile over the past several years than cryptocurrencies. Bitcoin, trading above $20,000 at the time of this writing, exceeded $50,000 for two brief periods in 2021—and fell almost as low as $30,000 in between. Other high-profile cryptocurrencies, such as Ethereum and Dogecoin, have experienced similarly dramatic highs and lows. 

But cryptocurrencies are also exceptionally volatile over much shorter periods of time. Day-to-day price fluctuations of cryptocurrencies eclipse those of traditional currencies, stocks, and precious metals, and do so consistently across assets and time periods. This phenomenon is not entirely driven by the longer-term ups and downs reported in headlines. Bitcoin, Ethereum, and other cryptocurrencies frequently exhibit daily price drops during bull markets and increases during bear markets far in excess of traditional assets. The interactive chart below provides one way to visualize this day-to-day volatility—the daily percentage increase or decrease in price in U.S. dollars from the previous day.

This interactive tool allows the reader to investigate the phenomenon of day-to-day volatility for different cryptocurrencies, traditional assets, and time periods. During the period 2018–2022, Bitcoin’s average daily change (measured as the absolute value of the percentage change from the previous day) was 2.87%, versus 0.34% for the euro, 0.43% for the pound, and 0.35% for the yen. Other major cryptocurrencies, such as Ethereum (3.76%), Ripple (4.04%), and Dogecoin (4.55%), exceed Bitcoin’s already-high fluctuations.

The table below presents this statistic for each asset or index tracked by the data tool. 
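As a concrete sketch, the statistic reported in the table—the mean absolute daily percent change—can be computed as follows. The price series and function name below are hypothetical illustrations, not the authors’ actual code or data:

```python
# A sketch of the volatility statistic described in the article: the mean
# absolute daily percent change in price. All prices are hypothetical.
def mean_abs_daily_change(prices):
    """Average of |percent change| from each day's price to the next."""
    changes = [
        abs(curr - prev) / prev * 100
        for prev, curr in zip(prices, prices[1:])
    ]
    return sum(changes) / len(changes)

# A volatile (crypto-like) series versus a stable (fiat-like) series.
btc_like = [20000.0, 20574.0, 19990.0, 20610.0, 20015.0]
eur_like = [1.060, 1.063, 1.061, 1.064, 1.062]

print(round(mean_abs_daily_change(btc_like), 2))  # large daily swings
print(round(mean_abs_daily_change(eur_like), 2))  # much smaller swings
```

Because the metric takes the absolute value of each day’s move, a coin that whipsaws up and down scores high even if its price ends the period roughly where it started—which is exactly the point about volatility being distinct from longer-term trends.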

Why is the day-to-day volatility of cryptocurrencies important? 

Despite much public discussion about cryptocurrencies as speculative investments or world-changing technology, their success ultimately hinges on widespread adoption as currencies—including as a medium of exchange. Day-to-day volatility creates exchange rate risk over short periods of time. This creates problems for a currency’s usefulness as a medium of exchange if one or both parties to the transaction need to quickly move their money into a different currency. Either the buyer or seller, or both, must take this exchange rate risk, increasing the transaction cost and, ultimately, the price. 

To date, the use of cryptocurrencies as a medium of exchange has taken off in only a small number of market niches, most notably dark net markets where mostly illicit goods are for sale. A 2018 article reported that Bitcoin’s high short-term volatility was adding to the cost and lowering the number of transactions on such platforms. 

There are likely multiple causes for the unusually high volatility of cryptocurrencies. While more widespread adoption may be part of the solution, other likely causes are structural and follow directly from the way cryptocurrencies are designed. Large banks and other financial firms hold huge reserves of traditional currencies, and stocks have market makers, both serving to smooth out short-term volatility and make exchange markets more liquid. Bitcoin, on the other hand, eschews large central intermediaries by design.   

Solutions lie in further entrepreneurial innovation, and that process is already well underway. Bitcoin’s Lightning Network is designed to facilitate faster transactions at a larger scale. Stablecoins, pegged in value to fiat currencies like the dollar or other assets, eliminate high day-to-day volatility by design. They can be used to keep money in the crypto ecosystem—protected from short-term fluctuations and, in theory, easier and faster to exchange with Bitcoin or Ethereum than traditional fiat currencies. However, their relative novelty opens the door for long-tail risk as well as fraud.

These and other avenues carry some promise to address day-to-day volatility and make cryptocurrencies more viable for everyday use. But innovation must continue. The Lightning Network and stablecoins both reintroduce the large financial intermediaries and the dependence on the fiat system that crypto pioneers sought precisely to avoid. Furthermore, the much larger number of people not yet sold on crypto may see these as further complications to already convoluted and risky alternatives to fiat.

The crypto community must turn away from voices, such as Bitcoin maximalists, who say the perfect solution is already in hand, and keep innovating and experimenting. Regulators could do great harm by making rules that ossify this still-developing technology or cut off as-yet unrealized solutions that only a market process of discovery can deliver.

We hope that the interactive tool provided here, which offers an intuitive way to visualize the phenomenon of day-to-day volatility in cryptocurrencies, will play a part in opening the conversation and potential for fresh ideas. 


We selected the top 10 cryptocurrencies by market capitalization from CoinMarketCap in addition to FTX’s FTT token. The top 10 cryptocurrencies include seven traditional cryptocurrencies and three stablecoins. We did not include the stablecoins, which track the day-to-day volatility of fiat currencies by design, in the interactive chart, but we do report their average daily changes in the summary table. Daily price and exchange rate data are sourced from Yahoo Finance via the R library quantmod. The only modification to the original source data occurred for the ruble-to-dollar data (RUBUSD=X). On Jan. 1, 2016, the original value appears to be off by a factor of 100, so we divided it by 100. Additionally, on June 13, 2022, and July 18, 2022, the adjusted close is outside the bounds of the high and low—and inconsistent with historical data on the close price from The Wall Street Journal. These two values were replaced with the open price from the following day.

Daily percent change values are calculated from the percent change from the previous trading day’s adjusted close price. Our comparison of daily changes across different types of currencies and assets presents a challenge because different assets trade according to different schedules. Stocks trade on exchanges with daily opening and closing times and close on weekends and certain holidays. Traditional foreign exchange markets stay open around the clock, Monday through Friday, but close on weekends, and this is further complicated by time zones and different holidays globally. Cryptocurrencies trade continually.  

There is subjectivity inherent in addressing this issue. We chose to limit our analysis to the trading days of our traditional stock indices (S&P 500 & Russell 2000), which align with New York Stock Exchange trading days, and use reported adjusted close as the price. While this eliminates a small amount of data from the sample for cryptocurrencies, we conducted robustness checks and confirmed this does not drive our results about persistent differences in day-to-day percent changes. 
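A minimal sketch of this alignment step, under the assumption that daily closes are keyed by ISO date strings (all names and data below are hypothetical, not the authors’ actual pipeline):

```python
# Crypto trades 7 days a week, but the comparison is limited to the days
# the stock indices traded. So the crypto series is restricted to those
# dates before percent changes are computed. All data are hypothetical.
def restrict_to_trading_days(series, trading_days):
    """Keep only observations that fall on the given trading days."""
    return [(d, p) for d, p in series if d in trading_days]

def pct_changes(series):
    """Percent change of each close versus the previous retained close."""
    closes = [p for _, p in series]
    return [(curr - prev) / prev * 100 for prev, curr in zip(closes, closes[1:])]

# A Friday close, a weekend (no index trading), then a Monday close.
crypto = [("2022-01-07", 41500.0), ("2022-01-08", 41900.0),
          ("2022-01-09", 41300.0), ("2022-01-10", 41850.0)]
nyse_days = {"2022-01-07", "2022-01-10"}  # index traded Fri and Mon only

aligned = restrict_to_trading_days(crypto, nyse_days)
print(pct_changes(aligned))  # a single Friday-to-Monday change
```

Note that the weekend moves are folded into a single Friday-to-Monday change rather than discarded entirely, which is why dropping a few calendar days does not mechanically shrink the crypto volatility estimates.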


Reason Foundation’s amicus brief in Gonzalez v. Google answers many of the questions raised by Supreme Court justices Wed, 01 Mar 2023 19:20:48 +0000 Congress originally made clear that Section 230 is part of a law intended not to limit free speech but to allow the internet to grow “with a minimum of government regulation.” 

The post Reason Foundation’s amicus brief in Gonzalez v. Google answers many of the questions raised by Supreme Court justices  appeared first on Reason Foundation.

On Feb. 21, the United States Supreme Court heard oral arguments in Gonzalez v. Google. (You can listen to the arguments here via C-SPAN.) Reason Foundation submitted an amicus brief on the case in January, and I found it interesting to see how some of the justices dug in on the issues raised in our brief.

This is a matter for Congress. Supreme Court Justice Brett Kavanaugh asked, “Isn’t it better for–to keep it the way it is, for us, and Congress–to put the burden on Congress to change that, and they can consider the implications and make these predictive judgments?” 

Reason’s brief pointed out that the questions raised in this case are matters of policy, not law, and Congress, not the Supreme Court, should resolve them.

“Whether Section 230 creates good policy is not a question for this Court to decide. That question remains where it was in 1996—with Congress,” Reason’s brief says.  

Congress originally made it clear that Section 230 is part of a law intended not to limit free speech but to allow the Internet to grow “with a minimum of government regulation.” 

Recommendations and “thumbnails” are not content creation. The petitioners argued that when a site creates ‘thumbnails’ that summarize or in some way represent the content it suggests you might want to click on, it is creating content. Chief Justice John Roberts questioned that argument, saying, “…it seems to me that the language of the statute doesn’t go that far. It says that –their claim is limited, as I understand it, to the recommendations themselves.”

This is central to the question before the Supreme Court: Is recommending or suggesting content that a user might want to see the same as creating that content in terms of liability?

As we argued in our amicus brief, this is not content creation. The central value proposition most online platforms offer customers is a way to find the content they want to consume, which requires some means of making recommendations. If any form of “you might like this” is equivalent to “here is what we think about this” in terms of liability, customers will no longer be able to get recommendations. 

Section 230 explicitly excludes most digital platforms from liability. Indeed, Justice Neil Gorsuch pointed out that Section 230 itself says that a content provider is defined by doing more than “picking, choosing, analyzing or digesting content” (it also includes “transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content”). These things are exactly what Google does to online content for its users, as do many other platforms, and so the letter of the law in Section 230 clearly states that Google’s core service is not content creation.

Justice Kavanaugh stated, “[petitioner’s] position, I think, would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230. And just as a textual and structural matter, we don’t usually read a statute to, in essence, defeat itself.”

As we put it in our brief:

“…as both a provider and user of such software, [Google] falls squarely within the class protected by Section 230(c)(1). Insofar as Petitioners are seeking to hold Google liable for the consequences of having presented or organized the ‘information provided by another,’ rather than for creating and publishing Google’s own information content, Section 230(c)(1) bars such liability.”

Actively adopting or endorsing content is required to be liable. There was a lengthy conversation about whether the algorithms are “neutral” when they recommend like to like or if they could cross the line to adopt or endorse some content or be “designed to push a particular message,” as Justice Elena Kagan put it. Google’s lawyers argued that even if their algorithm did in some way push a piece of content, any harm from that content (like libel) flows from the original content, not the platform’s actions with respect to it.

In our brief, we argue that there is a line that can be crossed, but it would have to go beyond the activities defined in Section 230 as immune from liability:  

“There is, after all, a difference between a provider or user suggesting the content of others to its users or followers based on their prior history or some other predictive judgment about likely interest and a provider or user actively adopting such content as its own, such as by endorsing the truth or correctness of a particular message or statement. … YouTube is not taking a stance when it, having collected enormous amounts of data on a user’s interests, points that user to content relevant to those interests. For example, if YouTube sends a list of new cat videos to a user that has watched cat videos in the past, the separate information content of that organizational effort is no more than: ‘You seem to like cats, here is more cat content’.” 

It would be madness to make users of digital platforms liable for likes and shares. Finally, Justice Amy Coney Barrett raised the critical question of how the petitioner’s arguments would affect internet users like you and me:

“So, Section 230 protects not only providers but also users. So, I’m thinking about these recommendations. Let’s say I retweet an ISIS video. On your theory, am I aiding and abetting and does the statute protect me, or does my putting the thumbs-up on it create new content? … [B]ut the logic of your position, I think, is that retweets or likes or check this out, for users, the logic of your position would be that 230 would not protect in that situation either, correct?”

To which the petitioners responded that yes, it would.  

As we point out in our brief:

“Section 230 provides its protection not only to the ‘providers’ of interactive computer services, but to the ‘users’ of such services as well. Removing immunity from Google here would equally remove immunity for persons hosting humble chat rooms, interest- or politics-focused blogs, and even for persons who ‘like’ or repost the information content of others on their blog, their Facebook page, or their Twitter account… Petitioners’ theory is wrong and would lead to absurd results. Section 230 protects both providers and users of interactive computer services from liability for sharing, recommending, or displaying the speech of another. Any attempt to split liability regimes between the ‘providers’ and ‘users’ of interactive computer services, or to distinguish the choices made manually by individual users about what to recommend or highlight to others versus the automated incorporation of the same or comparable choices into an algorithm, would be completely divorced from the text of the statute.”  

Indeed, Justice Kavanaugh pointed out that many of the amici, including Reason Foundation, argued there would be significant damage to the digital economy if Section 230 were pulled back and people could no longer share a broad range of useful information via digital platforms.  

While we still have to wait months for the Supreme Court’s decision in Gonzalez v. Google, seeing the justices’ questions hitting on these crucial points was heartening. The exchanges in oral arguments seemed to crystallize that petitioners are asking the Supreme Court to go against the explicit language of the law Congress put in place, expanding liability to online platforms for shared content and, further, making users of online platforms liable for any content they like or share. That would be disastrous.


FTC Chair Lina Khan’s consolidation of power is a feature of her approach to antitrust, not a bug Thu, 23 Feb 2023 22:10:00 +0000 New Brandeisians, led by Lina Khan, seek to move away from the consumer welfare standard of antitrust enforcement.

The post FTC Chair Lina Khan’s consolidation of power is a feature of her approach to antitrust, not a bug  appeared first on Reason Foundation.

The Federal Trade Commission and its chair, Lina M. Khan, have had a difficult start to 2023. On Feb. 1, a California federal district judge rejected the FTC’s attempt to block social media giant Meta’s acquisition of virtual reality fitness startup Within—a decision the FTC opted not to appeal. While few observers ultimately expected the FTC to prevail in court, the case was viewed as an early test of Khan’s attempt to “remake antitrust law” at the FTC, meaning its speedy and categorical rejection was bad news for Khan and her radical antitrust insurgency.

But the real bombshell came two weeks later when FTC Commissioner Christine Wilson made a self-described “noisy exit” from the commission in the form of a Wall Street Journal op-ed on Feb. 14. It wasn’t Khan’s overhaul of antitrust law that Wilson said drove her out—the commission is bipartisan and dissent is commonplace. It was Khan’s alleged “disregard for due process and the rule of law” and “abuses of government power,” Wilson wrote, that prompted her, the lone Republican commissioner, to leave the FTC. (Noah Phillips, the commission’s other Republican, resigned in October 2022.)

Wilson cites in detail Khan’s refusal to recuse herself from the commission’s failed bid to block Meta’s acquisition of Within. Before she joined the FTC, Khan had argued Meta (at the time named Facebook) should not be allowed to make any further acquisitions. Wilson says she objected to Khan’s refusal to recuse herself on both due process and ethical grounds but was overruled by the Democratic commissioners and Khan herself. Wilson made a similarly futile attempt to object to the recently proposed FTC blanket ban on non-compete clauses in employment contracts. 

The FTC is not an organization intended to be adversarial to the companies under its regulatory purview, but rather a neutral arbiter of whether any harm would come from mergers and other conduct it scrutinizes.  

More information regarding the rule violations alleged by Wilson is likely forthcoming. But those who have followed the antitrust philosophy of Khan and her allies on the progressive left should have little trouble connecting the dots between their antitrust goals and the wrongdoings alleged by Wilson. Fundamental to Khan’s vision is the scope and necessity for “good” government power to act as a check on bad “concentrated private power.” 

Khan ignited the left’s newfound interest in antitrust with a 2017 paper critical of the widely adopted consumer welfare standard (which focuses on prices) as weak and overly permissive toward mergers. Her Yale Law Journal article took aim at Amazon, specifically its capacity for predatory pricing to harm competitors and vertical integration to compete with sellers on its own platform. Amazon was but one example. The point was to encourage a much more active use of antitrust enforcement to check what Khan and others believed was the outsized influence of large corporations—a point driven home by the title of Columbia Law professor, and Khan ally, Tim Wu’s book, The Curse of Bigness.

Under this logic, the potential bad conduct by large private firms is limited only by one’s imagination. And prior to its ascendance in the Biden administration, the movement alternately known as “hipster antitrust,” “break up big tech,” and New Brandeisianism put its imagination to work. In addition to product market monopoly, there was labor market monopsony, vertical restraints, coercion and gatekeeping, and (as in the case of Meta and Within) power in predicted markets of the future. Perhaps the starkest case of this movement believing big is bad is their belief in the threat of market power to democracy. Some on the left have argued that large corporations, through their money, could boost certain political campaigns (likely to candidates who disagree with such hyperactive use of antitrust enforcement). 

None of these scenarios are implausible, but they remain hypothetical. Rather than clarifying which types of conduct are deemed anti-competitive, a long and expanding list of conduct for regulators to scrutinize amounts to de facto discretionary power. In effect, the New Brandeisians sought to move from the consumer welfare standard of antitrust enforcement to a standard that mandates that companies compete in the manner regulators would like them to.

Khan’s goal of restraining the growth and dynamism of American business as an end unto itself was on full display in November 2022, when the Federal Trade Commission issued new policy guidance regarding its role under Section 5 of the FTC Act to prohibit “unfair” competition. Claiming a mandate that went beyond antitrust legislation and court precedent, the commission stated that it could take action against competitive conduct deemed “coercive,” “exploitative,” “abusive,” or “restrictive,” leaving these terms subjective and undefined.

It was, as Wilson noted in her resignation op-ed, an “I know it when I see it” approach. Wilson’s concerns about due process and the rule of law appear well-founded. 

Khan now faces the public allegations that, in her first year as FTC chair, she waged war on the perceived specter of concentrated private power by concentrating an unprecedented amount of public power for herself and friendly FTC commissioners.

Thus far, her efforts have almost entirely failed. The tides could turn, as neither Republicans nor Democrats appear eager to bury their respective hatchets with big tech. But the biggest name in a movement once sarcastically labeled “hipster antitrust,” a throwback to the days before the consumer welfare standard, has instead garnered criticism and a high-profile resignation for allegedly neglecting legal norms that have stood far longer tests of time.


Changes to Section 230 would have devastating consequences ​for the internet and free speech Tue, 07 Feb 2023 18:30:58 +0000 Weakening Section 230 would ensure that whichever political party is in power at a given time could steer the speech that is allowed online.

The post Changes to Section 230 would have devastating consequences ​for the internet and free speech appeared first on Reason Foundation.

The Supreme Court is considering a very important case regarding the future of the internet and digital platforms, from search to social media. As SCOTUSblog puts it, Gonzalez v. Google asks:

Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information. 

Reason Foundation submitted an amicus brief in the Gonzalez v. Google case in which we argue that Section 230 is functioning as Congress intended, with great benefits to the users of digital platforms, and that the change in interpretation of the law that the plaintiffs are asking for would have devastatingly negative consequences.

The following segments are pulled from that amicus brief and quoted at length to convey our arguments.  

Section 230 and congressional intent 

Reason Foundation’s brief argues that the plain text of Section 230 precludes considering a digital platform to be a publisher just because it uses an algorithm to organize and present the content its users provide to others who might be interested in it. Indeed, when Congress passed the Communications Decency Act, it included congressional findings and purposes that make this clear. Reason’s amicus brief states:

What Congress did know is that, for the Internet to grow, it had to be left alone without fear of the “litigation minefield,” … that would cripple its expansion in its infancy if the providers and users of interactive computer services could be found liable for the content created by others.

Congress intended the government, including the judiciary, to get out of the way of the Internet’s growth. Congress explained that the goal of Section 230 is to “promote the continued development of the Internet” by, among other things, “encourag[ing] the development of technologies that maximize user control over what information is received by” those “who use the Internet and other interactive computer services.” Id. (b)(1), (3). And Congress expressed its goal of “preserv[ing] the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.” 

Crucial to this intent is understanding that using algorithms to organize, present, or prioritize content created by users does not equate to publishing or to accepting liability for the content shared. Technically, even a chronological timeline of user content is a simple algorithm. Just because services use other algorithms to present content does not fundamentally change their legal standing under Section 230:

Like every other court to decide this issue, this Court should recognize that “[m]erely arranging and displaying others’ content to users of [YouTube] through… algorithms—even if the content is not actively sought by those users—is not enough to hold [YouTube] responsible as the ‘developer’ or ‘creator’ of that content.”  
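The observation that even a “plain” chronological timeline is itself an algorithm can be illustrated with a minimal sketch; the post data and function names below are hypothetical:

```python
# A chronological feed and an engagement-ranked feed differ only in the
# sort key. Both are algorithms that arrange others' content; neither
# creates that content. Post data are hypothetical.
posts = [
    {"author": "alice", "ts": 1677000000, "likes": 12},
    {"author": "bob",   "ts": 1677000300, "likes": 90},
    {"author": "carol", "ts": 1677000100, "likes": 3},
]

def chronological(feed):
    """The 'no algorithm' feed: newest first; still a sorting algorithm."""
    return sorted(feed, key=lambda p: p["ts"], reverse=True)

def engagement_ranked(feed):
    """A recommendation-style feed: most-liked content first."""
    return sorted(feed, key=lambda p: p["likes"], reverse=True)

print([p["author"] for p in chronological(posts)])
print([p["author"] for p in engagement_ranked(posts)])
```

Swapping one sort key for another changes which post appears first, but in neither case does the platform author the posts it arranges—which is the distinction the brief draws.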

The benefits of broad Section 230 protections 

Section 230 has provided immunity from liability for content posted by others, which has greatly leveled the playing field in terms of information available. Interactive services make it easier than ever to reach others online: 

The technological innovations made possible by Section 230 have also greatly increased the ability of the average American to spread ideas. Section 230 protections … allow interactive computer services to provide their users, rich and poor alike, with a reach that, historically, would not have been available even to the most privileged classes with access to the gatekeepers of the institutional press. Indeed, because of its ability to expand the reach of speech, Section 230 has been described as “the internet’s First Amendment—possibly better.” 

Section 230 has allowed for a proliferation of information sources that consumers have utilized for everything from general news to the details of specific products. Analysis of how consumers use the internet and digital platforms shows how crucial it is that Section 230 enables user-submitted reviews of products, something platforms can share without fear of liability. Those reviews make it easier for consumers to find a product that meets their needs. One Internet Association (IA) survey of how consumers use online reviews found:  

  • 67% of respondents said they check online reviews either most of the time or every time before buying products in person or online 
  • 72% said it is highly important for a business to have positive online reviews before they buy 
  • 85% either strongly agreed or somewhat agreed that they would be less likely to purchase products online that did not have any reviews 
  • 65% responded with a seven, or above, out of 10 when asked how much they trust online reviews on a scale of one to 10

As we explain in the amicus brief: 

[R]esearch shows that “[b]uyers are looking to their peers to understand which products and services will benefit them, as peers can provide unbiased, individualized information.” Neither consumer reviews—nor the purchases they lead to—would be possible without Section 230 protections, particularly if platforms were potentially responsible for any reviews they hosted or organized in a manner useful to other shoppers.  

Beyond the importance of being able to share user reviews without liability for them, Section 230 is crucial to the flourishing of small businesses in a world where the digital side of doing business is crucial. 

After all, a “single small provider may use multiple large providers to operate their own service or forum” by, among other things, “maintain[ing] accounts and advertis[ing] across multiple social media and other services, in addition to relying on ISPs, domain name registrars, and hosting providers.” If larger providers lacked protections for content posted by smaller providers, it is unlikely that they would make their services available to them. Loss of Section 230 protections could thus harm not only digital platforms that have dominant market share, but everyone down to the atomized worker in the gig economy merely trying to get word out about her services and to be matched with users most likely to be interested in such services.

We can’t all be publishers 

The narrow interpretation of Section 230 sought in this case would make everyone who likes or shares content on a digital platform a publisher and liable for that content, which would be patently absurd. There are many reasons to like and share content besides endorsing it.  Again from the brief:

If Petitioners’ theory is correct, and Google truly is liable for recommending the content created by its users via an algorithm or otherwise, then every time a user of an interactive computer service shares a video, blog, or tweet created by another, then that user would become a developer of the underlying content and face potential liability for such content. Indeed, under Petitioners’ approach, by sharing another user’s content, the sharing user becomes the means by which that content reaches a broader audience. This is no different than what an algorithm does. 

And, as with algorithms, even though retweeting or sharing the content of another may not be an endorsement of a particular message, both are—at the very least—recommendations that the retweeted or shared content be viewed. Put differently, if suggesting content via an algorithm somehow falls outside of Section 230’s ambit by magically transforming one party’s speech into the speech of the platform, so too does retweeting it.

All of this does not equate to an argument that digital platforms are never publishers. If a platform affirmatively endorses an idea or content, then that would constitute publishing and be subject to liability. However, given that some type of algorithm, chronological or other, is required to present information to users, the mere presentation of content cannot and should not constitute publication. 

Section 230 is clearly meeting the intent Congress had when it created the law. If there are real problems with the current rules governing sharing digital content, Congress can fix them with new legislation. That is how such problems should be solved, not by the Supreme Court reinterpreting what Congress quite clearly intended. We would argue that Congress should not mess with Section 230 and should avoid trying to fix what is not broken.

Who is more responsible?

The recent so-called Twitter Files and Facebook Files revealed how, in recent years, the federal government has pressured and implicitly threatened digital platforms to suppress some speech the government did not like. However unhappy you may be with the content moderation decisions of any digital platform, giving the government the power to regulate those decisions has a clear outcome—more of what we saw it do to Twitter and Facebook, even when the government did not have clear legal authority to do so. Weakening Section 230 would ensure that whichever political party is in power at a given time could steer the speech that is allowed online, and the online speech we see would be even more partisan than it is today. No one wants that.

The post Changes to Section 230 would have devastating consequences ​for the internet and free speech appeared first on Reason Foundation.

Amicus Brief: Gonzalez v. Google Thu, 19 Jan 2023 20:48:00 +0000 For nearly three decades, Section 230 has served as the backbone of the Internet, precisely as Congress correctly anticipated and intended.

The post Amicus Brief: Gonzalez v. Google appeared first on Reason Foundation.

No. 21-1333 

In the Supreme Court of the United States 




On Writ of Certiorari to the United States Court of Appeals for the Ninth Circuit 

Brief for Reason Foundation as Amicus Curiae supporting respondent


I. For nearly three decades, Section 230 has served as the backbone of the Internet, precisely as Congress correctly anticipated and intended. The legislatively enacted congressional findings and purpose favor an expansive reading of Section 230’s protections in the event of any uncertainty or perceived ambiguity in the language of Section 230(c)(1).

A. Section 230’s benefits were by design, even if Congress could not have predicted every detail—or challenge—of a growing Internet. What Congress did know is that, for the Internet to grow, it had to be left alone without fear of the “litigation minefield,” Resp. Br. 19, that would cripple its expansion in its infancy if the providers and users of interactive computer services could be found liable for the content created by others. Congress thus enacted Section 230 with a list of policy statements that show what it intended and expected the statute to do: protect platforms and users from liability for the speech of others and promote the growth and use of interactive computer services.

Congress explained that the goal of Section 230 is to “promote the continued development of the Internet” by, among other things, “encourag[ing] the development of technologies which maximize user control over what information is received by” those “who use the Internet and other interactive computer services.” Id. (b)(1), (3). Section 230 has done that. Congress also expressed the importance of “preserv[ing] the vibrant and competitive free market that exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.” Id. (b)(2) (emphasis added). Section 230 has created that world, too.

Those policy statements are not mere pieces of legislative history entered into the Congressional Record by opportunistic politicians or their staffers—to the contrary, they are the product of bicameralism and presentment just like any other duly enacted legislation. And such statements are entitled to controlling weight regarding what policy considerations might influence the interpretation of Section 230. Whether Section 230 creates good policy is not a question for this Court to decide. That question remains where it was in 1996—with Congress.

B. Even years after Congress’s legislative findings and purpose, Section 230 has overwhelmingly fulfilled those legislative predictions and goals. By providing immunity from liability for the content posted by others, it has allowed for the development of new technologies that make it easier for everyone to find information online, to organize and to let others help organize the information they receive, and to associate both directly and indirectly with people around the world sharing common interests. These advances in technology have also led to the development of all manner of social media sites, including video-based platforms, dating apps, and even improved traditional chatrooms that provide users many of the same organizational tools as providers themselves.

The improved ability to find and organize information online is only one of the many benefits of Section 230. It also has led to an exponential growth in the amount of speech online. As providers have innovated and users have enthusiastically participated in online speech free from the “specter of liability,” Zeran v. America Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997), interactive computer services have made it easier for ideas to spread than ever before in human history. Through retweets and other user engagements, the views and content created by even the poorest Americans can spread around the country and world in a way that would not have been possible just twenty years ago.

Other benefits from Section 230 abound. The economic benefits to innovators, providers, users, and the economy as a whole have been tremendous. It has facilitated the gig economy by allowing individuals and small businesses to flourish on websites provided by bigger platforms. It has also allowed consumers to directly review products and other services, make those reviews readily available online for the next consumer, and pass along or comment upon reviews by others, thus democratizing the marketplace of products and services as well as the marketplace of ideas. Thus, insofar as such practical considerations matter to the interpretation of Section 230(c)(1), the findings and purposes of Congress are not only controlling, they are right.

II. The language of Section 230 both reflects such Congressional policy and confirms that Respondent should prevail in this case.

A. An “interactive computer service” “provides or enables computer access by multiple users to a computer server.” 47 U.S.C. § 230(f)(2). “Interactive computer services” expressly include “access software providers,” which—as relevant here—are providers of software or tools that can “pick, choose, analyze, or digest,” “transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.” Id. (f)(2), (4)(B), (C). The providers of such services and their users can both create their own information content and can organize, transmit, and provide access to information content provided by others.

B. YouTube’s algorithm, which organizes and reorganizes the content uploaded to YouTube by others, thus performs a function which Congress expressly included in the definition of an interactive computer service. Indeed, as both a provider and user of such software, Respondent falls squarely within the class protected by Section 230(c)(1). Insofar as Petitioners are seeking to hold Google liable for the consequences of having presented or organized the “information provided by another,” rather than for creating and publishing Google’s own information content, Section 230(c)(1) bars such liability.

To the extent any given algorithm or other organizational policy or choice might be said to create Google’s own “content,” the further question becomes the precise parameters of such content as distinguished from the content of others. That distinction helps clarify that even where an algorithm or other organizational action or policy itself might create some information content (appending a warning label for example), a user or provider may only be held responsible for that information alone, and not the underlying information “provided by another.” Alternatively, if YouTube or any other user of its service were to expressly adopt or endorse the information content of another as its own, such adopted content may well fall outside of Section 230’s protection.

But merely identifying, organizing, or even recommending the content of another is a far cry from adopting it as your own. YouTube’s algorithm, for example, analyzes different users’ activity and viewing behavior to predict what that user might find interesting and to organize further information content provided by others according to such predictions. Though the algorithm’s analysis and predictions are more automated and sophisticated than manual efforts to organize or recommend content in a manner appealing to users, it remains fundamentally the same as the manual choices exercised by chatroom moderators, bloggers, and indeed, any individual user who selects, reposts, “likes,” or otherwise passes along the information content of others in a way such user believes might be interesting or appealing to her followers and potential followers. Such organizational effort by both providers and users of interactive computer services is precisely what Congress anticipated and intended to encourage via Section 230, and the text provides broad protection reflecting that purpose.


The pitfalls of regulating app stores Tue, 20 Dec 2022 05:00:00 +0000 Policymakers should continue to let app stores innovate and evolve without policy intended to force them into certain practices. 

The post The pitfalls of regulating app stores appeared first on Reason Foundation.

Despite being less than 25 years old, digital software marketplaces known as application stores, or app stores, have become some of the largest software sales platforms in the world. In 2021, first-time app installs grew to 143.6 billion, with consumer in-app spending growing to $133 billion. These figures are only expected to continue to grow. 

App stores act like a filter for software products by setting security, privacy, financial, and performance standards. Developers must adhere to these standards if they want to sell their software through an app store. This ensures compatibility with devices and protects consumers from malware or other intrusive software. App store owners typically charge a commission to developers who offer paid apps or in-app purchases through the store. 

Recent litigation and the introduction of a federal bill, the Open App Markets Act (OAMA), demonstrate public concern regarding the power of app store marketplaces. Georgia, Hawaii, Illinois, Minnesota, and New York have seen bills similar to OAMA introduced at the state level.  

The regulations proposed by OAMA, which won’t pass in this Congress but may be taken up again by the next Congress, would apply exclusively to platforms with more than 50 million U.S.-based users. OAMA would require covered platforms to provide access to third-party app stores, a mandate referred to as “interoperability,” essentially meaning that users should be able to access any app store from a covered device. The bill also requires covered platforms to allow for “sideloading,” the practice of installing apps from places other than an official app store.

Federal legislation like OAMA (or whatever comes out of the next Congress) seems intended to create a more open market with greater competition among app store marketplaces. Unfortunately, it would likely fail to do so while creating additional problems. The following are four considerations and concerns about the requirements that would come with the potential passage of OAMA and similar legislative proposals. 

For the purposes of this piece, terms will be simplified. Within Apple systems, iOS is the operating system and Apple App Store is the app marketplace. Within Google systems, Android is the operating system and Google Play Store is the app marketplace. To simplify, wherever possible, these unique systems will simply be referred to as “Apple” or “Android” to identify two unique operating systems and app stores. 


App store legislation could create major changes to data security and privacy practices. Every day, app stores handle a tremendous amount of web traffic. Apple’s App Store had more than 143 billion app downloads and processed over $85 billion in revenue in 2021. Google’s Play Store had more than 111 billion app downloads in 2021 while processing some $12 billion in revenue. 

Because these platforms process app purchases, sensitive financial and personal identification data must be protected. Even for free apps, each download is connected to an individual’s name, email address, device ID, and IP address. Firms have taken different approaches to data protection, which offers consumers choices between different app store data security practices.

For example, Apple differs from Android in that it has created an “ecosystem” in which Apple controls all aspects of the product from device manufacturing to the operating system. At most, there are five different hardware configurations for the iPhone and they all use the same iOS operating system (unless the user elects not to update to more recent versions). Apple then manually verifies each app using a three-layered security system to prevent malware from entering devices through the app store, the most common route for hackers.

Apple has deliberately chosen a closed approach because, among other reasons, they believe that it is more secure. While there are multiple ways to measure security, the Nokia Annual Threat Intelligence Report studies how often harmful malware targets different devices and software. The 2020 report shows that devices running Android were infected with malware at 15 times the rate of iPhones. 

This is likely because Android is an open-source project, meaning that anyone can use and modify the system without any fees. As a result, there is greater diversity in both devices and software. There are at least 3,000 different hardware configurations and at least a dozen different versions of the Android OS. Android deploys a proprietary anti-malware system that scans apps across its ecosystem, over 100 billion apps a day, to prevent malware.

Consumers have choices between closed and open systems and can evaluate the strengths and weaknesses of each approach. Android has more hardware options at more prices and has over a million more apps in its store than Apple does. This suggests that an open-source approach provides greater choice in terms of price, devices, and the total number of apps. However, there appears to be a greater malware risk in running Android. Consumers can opt for Apple if they value more security but do not mind losing access to certain apps. But under OAMA, federal law would effectively make every device subject to the open-source nature of the Android project, thereby limiting consumer choice and perhaps increasing security risk. 

OAMA would also require that covered platforms provide access to “OS interfaces, development information, and hardware and software features” to developers. App stores already provide necessary information for app development, often called a software development kit (SDK). Mandating developer access in this manner could be detrimental to certain business operations. Operating systems are extremely complex and disclosing major portions of code could give malicious actors the information they need to infect devices with malware. Technology companies are left with little regulatory clarity as to what information they can protect and what must be shared, thereby making app security more difficult to the detriment of consumers.  

Another area of potential security and privacy violations involves payment systems. Current law maintains that platforms are entitled to set the publishing, purchasing, and payment processing terms of app access and in-app purchases. However, they must allow app developers to provide links to payment processing systems outside of the app store. App store legislation would force app stores to let publishers handle payment in the app with any payment system they desire. 

Both Apple and Android allow users to add certain verified payment methods for secure checkout such as Apple Pay, Google Wallet, and PayPal. These services invest heavily in anti-malware software as part of their offering.  Allowing developers to choose any payment method they like and integrate it into the app would introduce a security gap by adding software that has not been vetted.

If app stores are unable to review, approve, and deny app submissions based on payment system compliance, consumers could be subject to fraud and information theft. Even if an application developer acted in good faith in accepting these forms of alternative payment, malicious hackers could still steal consumer information through insecure payment systems.  

Any changes to app store ecosystems should continue to allow companies to protect consumers from non-secure software. Platforms must be able to review apps that developers submit before publishing them, choose which specifications to release, and vertically integrate to offer in-house solutions. 


App store legislation looking to protect consumer security and privacy should:

  • Allow for both open and closed approaches to app store operations
  • Allow platforms to review apps before publishing them and set the terms of publication
  • Give platforms control over what hardware and software specifications they choose to publicly release
  • Protect a platform’s ability to require secure payment methods for app purchases


App store legislation aims to promote competition by requiring access to all apps in covered app stores so that no apps can be blocked from customers, but it fails to account for existing competition to app stores that already offer consumers these choices. Rather than promote competition, app store legislation would burden app store operators relative to their competitors. 

Progressive web apps (PWAs) have emerged as major competitors to native apps sold through app stores. Native apps take up hard drive storage on devices and need to be custom designed to work on the operating system for which they are intended. PWAs function like traditional websites in that they use their web-based structure to store necessary data, meaning that they are not downloaded to your phone. PWAs have surged in popularity because they are mostly device- and OS-agnostic, making them popular with developers who want to build a single app and make it available for any device. 

PWAs, like any application on the internet, should only be used from trusted sources because of the greater security risks they carry. However, there has been no shortage of trusted software developers and companies bringing PWAs to the market. Popular services such as Tinder, Lyft, Facebook, and many more have invested in PWAs as an alternative to conventional apps. Even Epic Games’ popular game Fortnite, which was pulled from the iOS store while a legal dispute with Apple ensued, is playable on iOS through several cloud options. PWAs may never match native apps in terms of performance, but provide a solid alternative to users seeking to use apps not found in specific app stores.  

Apple and Google app stores also face more direct competition from other popular app stores like the popular gaming marketplace Steam, Microsoft Store, and the Amazon App Store. These competitors seek to offer a functional device connected to the internet with a marketplace for applications, just like iOS and Android.  

OAMA would also arbitrarily distinguish “general purpose computing devices” (GPCDs), such as computers, laptops, and smartphones, from apparent non-GPCDs, such as gaming consoles and smartwatches. Non-GPCDs would be exempt from app store legislation regulations, yet the bill provides no reasoning as to why such similar devices should be regulated so differently. Microsoft boasts that the graphics processing unit in the new Xbox Series X is equivalent in power while being faster than many desktop PCs. Usage statistics reflect these capabilities. Only 55% of the total time spent on the Xbox is spent gaming. Roughly 30% of all time spent on Xbox is on Netflix and YouTube while another 15% is spent on other non-gaming activities. Even though nearly half of all time on the Xbox is spent using the device more like a computer than a gaming console, it would not be subject to app store legislation regulations.

In the smartwatch market, OAMA could favor hardware makers such as Garmin over the Apple Watch and Samsung Galaxy Watch. While Garmin offers apps such as Spotify through a third-party app store, OAMA  would not classify it as a GPCD because it may lack an operating system sufficient to conduct general computing operations. Apple Watch, Galaxy Watch, and other GPCD smartwatches would be subject to more regulation only because they have a larger reach and more capabilities. Devices such as gaming consoles and smartwatches share enough capabilities with GPCDs that it makes little sense to create this distinction. The GPCD determination would not be able to keep up with the complex and rapid pace of technological innovation, and it may discourage future innovation and the development of more powerful smartwatches that could be subject to OAMA regulations as a GPCD.

While OAMA would seek to provide consumers with alternatives to Apple and Android, it fails to consider that consumers can already choose between app stores, PWAs, and other alternative software sources. These options may be gaining in popularity. In 2021, Apple saw downloads decrease year-over-year for the first time. In the long run, competition and negotiations between app stores and app developers will serve customers better than federal micromanagement of app store operations.


  • Innovations like PWAs make “blocking” an app from a device difficult, providing consumers with options no matter their operating system or device
  • App stores come in a variety of forms. Regulations should not target by size but instead focus on consumer harm, no matter how large the store
  • Most modern devices share basic computing capabilities. Arbitrary distinctions codified in statute will serve neither consumers nor innovation

Self-Preferencing and Verification 

OAMA would aim to prevent app store operators from “self-preferencing” their apps over competitors’ apps. To do so, it would prohibit ranking algorithms from considering ownership as a factor in where an app appears in search results. As a result, if a consumer searches for “maps” on an Android device, Google Maps cannot be at the top of the list simply because the user is on Android’s Google Play Store. But the bill also states that application preferencing via advertising is allowed as long as the platforms disclose advertisements. Thus, it is unclear from OAMA’s language whether app store operators could place their products at the top so long as they disclose that placement as advertising.
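To make concrete what “considering ownership as a factor” might mean in a ranking algorithm, consider the toy scoring sketch below. The function, field names, weights, and first-party boost are all assumptions invented for illustration; they are not OAMA text and do not describe any actual app store’s ranking logic.

```python
# Hypothetical illustration of an app-ranking score. The weights and the
# 0.2 first-party boost are invented for this sketch, not any real system.
def rank_score(app, allow_ownership_factor):
    score = 0.7 * app["relevance"] + 0.3 * app["install_rate"]
    if allow_ownership_factor and app["first_party"]:
        score += 0.2  # this is the kind of ownership boost OAMA would prohibit
    return score

apps = [
    {"name": "Maps A", "relevance": 0.90, "install_rate": 0.80, "first_party": False},
    {"name": "Maps B", "relevance": 0.85, "install_rate": 0.75, "first_party": True},
]

# With the ownership boost, the store's own app wins; without it, relevance decides.
boosted = max(apps, key=lambda a: rank_score(a, True))
neutral = max(apps, key=lambda a: rank_score(a, False))
```

Under OAMA, only the ownership-blind version of such a score would be permitted, even where the boost reflects integration benefits that some consumers may value.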

Another scenario is that an operator’s app organically rises to the top of search results, but without full algorithmic transparency, it may be impossible to avoid self-preferencing charges. In response, firms may choose to intentionally lower their placement in search. But doing so could lead to artificially suppressed application downloads, causing consumer confusion and unrealized economic gains. A similar European law banning self-preferencing has degraded consumers’ experience. A 2013 report from the Federal Trade Commission concluded that rather than using self-preferencing to exclude competitors, the changes Google made to its search engine were to “improve the quality of its search results and that any negative impact on actual or potential competitors was incidental.” 

Since self-preferencing is so common, it suggests that consumers find some kind of value in it. When customers search Google, they may expect to see Google Maps first because they are searching using Google. If they search Amazon, they may expect to see Amazon’s white-label products in the same way that Walmart promotes its private-label brands. However, this practice does not prevent customers from simply scrolling another inch or two to another mapping service or walking past the private label rack to choose another product. Self-preferencing is not a method for excluding competition, but rather a way for companies to provide cheaper and customer-friendly solutions for common adjacent products, something that consumers often value. 

App store legislation would also require app store platforms to provide end users with “the technical means to verify the authenticity and origin of third-party apps or app stores.” Ironically, this is what app store operators already do for consumers when an app is submitted to the store, which is often a technically difficult undertaking. For certain apps that intentionally obscure personal or identifiable information, such as Web3 and cryptographic technologies, it may be impossible for firms to verify the origin or authenticity. 

This could stifle innovation and lead privacy-focused app developers to forgo submitting to app stores altogether. Under OAMA, firms could be forced to spend a large amount of time trying to track down a developer’s location or identity, diverting resources to a task that may be impossible. It is ultimately unclear what is meant by providing the “technical means,” and this could open up platforms to litigation if they do not provide enough technical information and capabilities to consumers.  


  • Banning “self-preferencing” will likely create regulatory confusion and result in negative outcomes for consumers
  • Self-preferencing is widely practiced in the rest of the economy, from grocery stores to car dealerships, and is recognized by consumers
  • Most consumers cannot technically verify apps. This is a service that app stores already provide to consumers as part of the normal business operations

Consumer Preference 

Passage of OAMA would signal a view that the app distribution models of Apple and Android are terms dictated by smartphone behemoths to leverage power in vertical markets and extract the profits of third-party app developers. But decades of competition among smartphones (alongside competition from laptop and desktop computers) suggest the native app distribution channels observed today emerged as value propositions offered by Apple and Android to consumers.

Before the first app stores launched in 2008, there were no formal software filtering services for mobile or desktop, and consumers were forced to rely on third parties or word of mouth to determine whether software was safe. This resulted in major viruses taking down nearly 10% of the internet and causing billions of dollars in damage. Cyberthreats have since evolved into malware, phishing, and other techniques designed to get around the tight security of app stores, but such security still provides a valuable bulwark against malware.  

In the years since app stores debuted, a variety of approaches to app store operations has evolved, giving consumers choices in terms of price and security. The closed nature of Apple’s ecosystem is part of an overall strategy and value proposition that is fundamental to the Apple brand. Apple consumers often cite the ease, security, and user-friendliness of this closed approach as reasons for selecting its products.

But, relative to Apple, Android is a more open ecosystem that still exercises some central control over the versions of the operating system licensed to multiple hardware manufacturers. The result is more flexibility in cost, usage, and software, a mix often cited by Android users as a superior balance of features and flexibility to that offered by Apple. Once again, this model mirrors long-standing and ongoing competition in the market for laptop and desktop consumers.

Web apps provide users with ease of use, security, and flexibility on both iOS and Android. Unofficial options such as “jailbroken” phones are also widely available. Continued innovation and consumers’ ability to switch or use multiple devices and channels for apps provide a degree of competitive pressure and market discipline on both Apple and Android devices, including their channels for distributing native apps.

Neither ecosystem is static, and continued innovation and competition between the two ecosystems shape the way they distribute native apps. Both Apple and Android ecosystems have adapted to, and sometimes fueled, further changes such as in-app purchases, whose central role in gaming was likely not foreseen in the early years of touchscreen smartphones.

It would be incorrect to assume that app store legislation would result in greater third-party native app competition while leaving Apple and Android’s security features and user experiences unchanged. Apple and Android’s ecosystems would likely find other ways to maintain their market-tested approaches, albeit less efficiently, at higher costs, and possibly with higher prices to end-consumers. These unintended consequences would reduce, and potentially eclipse, any potential benefits from increased openness in the market for native third-party smartphone apps.


  • App stores are an innovation that protects consumers from malicious software
  • Consumers have a choice in the market between open systems and closed systems, each with accompanying strengths and weaknesses. 


App stores serve a valuable function to consumers by vetting software applications before they gain access to consumer devices and information. Before this, users were at much greater risk of contracting viruses on their computers and phones. While the service has been mostly offered by a handful of firms, there is nothing inherent about the app store service itself which would preclude other competitors from entering the market. Each app store has a unique approach to its offerings which gives consumers choice about what kind of app store they want to use, typically involving a tradeoff between openness and security. 

Using policy to force all major app stores onto devices could reduce consumer choice while creating major issues for security. Until there is significant and demonstrable consumer harm, not just a perceived lack of consumer choice, policymakers should continue to let app stores innovate and evolve without policy intended to force them into certain practices. 


Can the FTC block technology mergers based on future market predictions? Mon, 19 Dec 2022 21:58:07 +0000 The bid to block Meta from acquiring Within will test the FTC’s argument that potential future concentration is enough to stall the merger.  

The post Can the FTC block technology mergers based on future market predictions? appeared first on Reason Foundation.

The Federal Trade Commission’s (FTC) bid to block Meta Platforms, Inc. from acquiring Within, designers of the virtual reality fitness app Supernatural, began in a San Jose court on Dec. 8. The three-week hearing is expected to test the FTC’s argument that potential future concentration in the still-developing market for virtual reality fitness applications is enough to stall the merger of Meta and Within.  

Federal Trade Commission Chairwoman Lina Khan, tapped by President Joe Biden to lead the agency last year, hopes to preside over the most significant change of course for U.S. antitrust policy in decades. She and others belonging to the New Brandeisian school of antitrust advocate a far more aggressive stance toward mergers. This makes the FTC’s case against Meta one to watch, as it may offer a preview of the FTC’s new strategy, as well as its potential success in court.

When Facebook rebranded as Meta in Oct. 2021, the company signaled a change in its strategic outlook. Meta began investing heavily in virtual reality (VR) technology, which is expected by many to grow rapidly over the next decade. Today, Meta has already entered markets for VR hardware, social platforms, and games. As part of that strategy, Meta announced last year that it would acquire Within, developers of the VR fitness app Supernatural, for $400 million. A large tech company acquiring a niche startup in a nascent, fast-developing market is not an unusual event. Along with fitting Meta’s strategy, startups like Within often consider such buyouts successful outcomes of their entrepreneurial ventures.  

The Federal Trade Commission’s July 2022 announcement that it was blocking the acquisition reflects the more aggressive antitrust approach Khan is taking. The FTC press release says:

The complaint alleges that Meta is a potential entrant in the virtual reality dedicated fitness app market with the required resources and a reasonable probability of building its own virtual reality app to compete in the space. But instead of entering, it chose to try buying Supernatural. Meta’s independent entry would increase consumer choice, increase innovation, spur additional competition to attract the best employees, and yield other competitive benefits. Meta’s acquisition of Within, on the other hand, would eliminate the prospect of such entry, dampening future innovation and competitive rivalry. 

This theory of harm departs significantly from the consumer welfare standard, which Khan and fellow advocates of antitrust reform blame for the permissive stance on mergers of the past several decades. The FTC’s theory of harm to future competition in the potential market for virtual reality fitness apps borrows that standard’s familiar concentration logic: fewer competitors and higher market concentration lead to higher prices. But the traditional consumer welfare standard applies this logic to actual consolidation and competition in existing, definable markets. The FTC’s future-competition theory is an exercise in speculation.

The Federal Trade Commission’s complaint tries to define a “dedicated VR fitness app” market. Meta’s court filing in response calls that market a piece of “litigation fiction,” noting that, subsequent to its initial complaint, the FTC raised its count of competitors in that imagined market from five to nine. Meta’s response further states:

Every relevant competitor who will testify – including representatives of three of the FTC’s claimed in-market apps and one that is poised to enter – will state that there are many other VR and non-VR fitness alternatives available to consumers beyond the nine cherry-picked apps that comprise the FTC’s gerrymandered market. And even the FTC’s invented market is neither oligopolistic nor even “concentrated” in any meaningful respect. It is robustly competitive with many competitors jockeying for consumers’ attention and more entering all the time. 

Meta also asserts it had no plans to create and offer its own fitness app prior to the Within deal and will call industry witnesses to testify that a self-designed VR fitness app from Meta was not widely expected. 

While the FTC’s theory of harm to future competition is perhaps plausible, court precedent requires hard evidence of an existing market mechanism by which an acquisition reduces competition. In rapidly evolving high-tech markets, the ultimate structure of a market still far from maturity is impossible to project even in broad terms, let alone up to the evidentiary standards of a U.S. court. 

Nearly all observers agree the odds of winning the case are not in the FTC’s favor. For example, an August 2022 commentary in Fortune magazine by Gary Shapiro, president and chief executive officer of the industry trade group Consumer Technology Association, called the FTC’s case against Meta “laughable.”

A recent article from The New York Times states: “Given how novel the F.T.C.’s argument is, it’s unclear if the agency will succeed in blocking Meta’s deal.”

Khan herself may tacitly agree her agency is unlikely to win the case, as it has been reported that Khan suggested at an April 2022 conference that cases should be brought to push the frontiers of current law, adding: “I’m certainly not somebody who thinks that success is marked by a 100 percent court record.”

Bringing cases virtually nobody believes the FTC can win, at significant cost to taxpayers—not to mention both large tech firms and startups—seems to be a poor organizing principle around which any presidential administration would build its competition policy.  

The New York Times article that quotes Khan on her agency not being afraid to lose cases suggests that even a losing effort against Meta may, in the long term, push public, legislative, and court opinion in a direction more favorable to blocking mergers under a future-competition theory of harm. 

However, Khan’s FTC may also have broader strategic goals in mind. The FTC has taken nine actions against mergers and acquisitions in its first year under Khan, a level of activity far above that of preceding administrations. These actions seem designed to test antitrust law with multiple theories of harm. The FTC attempted to block an acquisition by Illumina, makers of gene-sequencing products, of a small startup in a market where Illumina does not currently compete. And in another ongoing effort, the FTC is trying to block Microsoft (maker of Xbox consoles) from buying game developer Activision under a vertical foreclosure theory of harm that courts have generally not accepted.

Perhaps Khan’s goal in bringing losing cases to court, in addition to testing specific novel theories of harm, is simply to create a chilling effect on mergers overall. If partners in potential mergers and acquisitions assign a higher probability to being challenged in court, the resulting increase in expected costs could depress merger and acquisition activity.

Khan and her allies would likely consider this a win. They often take as a starting point the premise that a permissive approach to mergers over the last 40 years has led to a dramatic increase in corporate power and that such power harms workers, consumers, and other stakeholders, even potentially subverting the democratic process. (Other economists vigorously dispute this premise, asserting that concentration has not meaningfully risen over time.)

Courts can render surprising verdicts, and the consensus that the FTC is unlikely to win its challenge is no guarantee. But even assuming the case ultimately fails, Khan’s opposition to high levels of merger and acquisition activity makes the trial worth watching closely. If the court battle is lengthy, expensive, and laden with appeals, even a loss by the FTC could chill future mergers. But if the FTC is dealt a quick and decisive loss, it may be Khan and others like her who feel the chill.

Florida should learn from the mistakes of California and European privacy laws Thu, 08 Dec 2022 03:55:46 +0000 As people increasingly move their lives into the digital world, demands will inevitably grow for greater data protection rules and more restrictions on what private companies can do with this information.

The post Florida should learn from the mistakes of California and European privacy laws appeared first on Reason Foundation.

As people’s lives increasingly take place in the digital realm, concern is growing about how private companies and government entities store and use sensitive data. These anxieties have led to demands that state legislatures pass data privacy laws. In 2021, a Morning Consult poll showed 86% of Democrats and 81% of Republicans said passing a federal data privacy standard should be a priority for Congress.

Despite this rare bipartisan agreement in an increasingly polarized political climate, Congress has failed to pass such a data privacy law. Earlier this year, Rep. Frank Pallone (D-NJ) introduced the American Data Privacy and Protection Act (ADPPA), which has serious flaws but is the closest Congress has ever come to enacting a federal data privacy policy.

House Speaker Nancy Pelosi (D-CA) refused to bring the bill to the floor because it did “not guarantee the same essential consumer protections” as the California Consumer Privacy Act (CCPA), the state’s harmful 2018 data privacy law. The ADPPA would not solve the developing state-patchwork problem because it acts only as a floor of minimum required regulations, on top of which states could add more. California’s law is an example of state regulations heavier than the federal standard would be if the ADPPA were passed. A federal data privacy standard should instead act as a ceiling and should not be as extensive as the CCPA.

In the Senate, the ADPPA faced an equally hostile reception, with Sen. Maria Cantwell (D-WA), chair of the powerful commerce committee, refusing to hold a hearing because of her concerns surrounding “enforcement holes.” The ADPPA would require annual algorithmic assessments, which would create recurring compliance costs for firms and would also require considerable federal resources to enforce. These enforcement difficulties suggest that an entity like the government may not be in the best position to regulate something as dynamic and technical as algorithmic decision-making.

This raises the question of whether a data privacy law is needed at all. If one is, it should ideally address all these issues and create a reasonable national data privacy standard that solves the patchwork problem. But without that standard, more states may feel compelled to address privacy concerns and should be aware of the pitfalls to avoid.

Since the implementation of the California Consumer Privacy Act in 2020, four states (Colorado, Connecticut, Utah, and Virginia) have enacted their own privacy laws. Complying with a regulatory system in which data laws vary from state to state is highly inefficient for the economy. Most businesses have an online presence, and more and more operate in all 50 states. The costs of regulatory compliance in this environment stifle competition: only businesses with sufficient capital can comply, and many smaller upstarts cannot.

For several years, it looked like Florida would join the growing number of states passing data privacy laws. Florida Gov. Ron DeSantis supported a data privacy bill in 2021, but the state legislature was split over a private right of action, which would have granted Floridians the right to sue and receive financial compensation for violations. With Florida’s 2023 legislative session approaching, it’s time to consider what a data privacy bill in Florida should look like, especially if Florida lawmakers want to avoid the mistakes of CCPA and Europe’s General Data Protection Regulation.

The most serious mistake would be including a private right of action in legislation. On the surface, allowing individuals to bring lawsuits against violators may seem like it would help hold firms accountable, but the unanticipated reality is much different. Even laws that govern more serious and personal information, such as the Health Insurance Portability and Accountability Act (HIPAA), do not include a private right of action. In other laws, like the Americans with Disabilities Act (ADA), a private right of action exists but has been significantly curtailed to reduce the number of “serial” cases abusing the ADA. If Florida passes a data privacy law with a private right of action, it would inevitably feed a cottage industry of frivolous lawsuits that trap businesses in litigation cycles, suppressing innovation and raising costs.

Burdensome data privacy regulations also stifle innovation. For example, a Cato Institute study of the Fair Credit Reporting Act (FCRA), which regulates how credit bureaus manage consumer data, argues that data privacy requirements have made the industry so tightly regulated and costly that innovation has stagnated and new entrants cannot enter the market. Likely only large, resource-rich firms will retain the ability to comply with complex data privacy laws.

Evidence from the European Union (EU) may support this claim. Two months after the EU implemented the General Data Protection Regulation (GDPR), 30% of U.S. news sites blocked EU access due to an inability to comply. An HEC Paris study of 6,286 EU websites found a general 10% reduction in internet traffic, costing millions of dollars in revenue. The study also found that GDPR’s rules hurt smaller websites (a 10-21% drop) more than larger ones (2-9%), suggesting that, as with credit-reporting regulation, data privacy regulation may entrench today’s large websites while deterring new entrants.

Policymakers should also consider that many of the consumer ‘rights’ commonly included in data privacy bills could become regulations that negatively impact consumers. For example, the right to opt out of the sale and sharing of data sounds simple but becomes a prescription for how websites earn revenue and handle data. Websites share consumer data with advertisers and data-processing companies to generate revenue. Florida lawmakers should note that allowing users to opt out of this transaction, the primary source of revenue for many websites, would alter the business model at the internet’s core. Some websites may shut down if forced to serve users whose data they cannot monetize through advertising because those users have opted out. Others may have to charge such users for previously free services to keep servers running. Policymakers should weigh these downstream impacts on consumers as they decide which data rights consumers should have.

In addition, there is certain to be confusion around what constitutes the sharing of data. For example, if a website provides a temporary interface through which advertisers determine which data segments they want to target, that could reasonably be considered sharing. However, there is no industry-accepted definition of data sharing. When considering data privacy legislation, Florida policymakers must therefore provide clear guidelines for what constitutes data sharing.

Data privacy can be protected without such burdensome regulations. Other rights, such as the rights to correction and deletion, can have minimal impact as long as they are given appropriate cure periods, such as 90 days. Privacy notices with continued opt-in, which prevent users from having to accept cookies every time they visit a site, can smooth the experience while providing consumers a transparent and understandable privacy contract available at any time. Distinguishing between personally identifiable data and de-identified data can also prevent needless regulation of non-personal data.

As people increasingly move their lives into the digital world, demands will inevitably grow for greater data protection rules and more restrictions on what private companies can do with this information. However, crafting data privacy rules that balance individuals’ demands and the needs of businesses is a perilous task that risks either providing too few protections or overregulating the digital space, ultimately harming Floridians. If the Florida legislature nonetheless pushes forward on a data privacy law, it can begin to strike this balance by excluding a private right of action, limiting the right to opt out, and providing clear guidelines for data sharing within an open and transparent privacy agreement.

Florida can do better than California and Europe’s data privacy laws, but only if lawmakers recognize the promise and perils.

Data analysis suggests privacy legislation may make the internet less user-friendly Tue, 11 Oct 2022 20:30:00 +0000 Survey data shows that EU citizens may experience friction from the GDPR in using the Internet for simple tasks, and Americans should take note.

The post Data analysis suggests privacy legislation may make the internet less user-friendly appeared first on Reason Foundation.

At first glance, the European Union’s (EU) General Data Protection Regulation (GDPR), which took effect in 2018, is a regulation that focuses on protecting consumer privacy by mandating procedures for websites collecting and managing user information. But survey data collected after the passage of GDPR reveals that it may have been detrimental to continuously improving user experience.

For services provided in the EU, GDPR requires every publisher with a website or app to obtain each unique visitor’s consent before loading the platform. The publisher must then provide an interface for managing how much data is kept, publish a dedicated cookie policy, host a separate privacy policy, and satisfy other exhaustive requirements.

The impact of GDPR has gone much further than changing protocols for website hosts. An analysis of the 2017 and 2019 waves of the CIGI-Ipsos international survey on attitudes toward the internet suggests that EU members may be seeing slower improvements in UI (user interface) and UX (user experience) than much of the world, an unintended, real-world consequence of adopting privacy regulation as restrictive as GDPR. For the purposes of this comparison, all surveyed countries under GDPR jurisdiction are labeled ‘GDPR’; all others are labeled ‘world.’

Self-reported difficulty using the internet for daily tasks, before and after GDPR.

Across the above metrics, many more non-GDPR residents reported an easier internet experience. If “easier” can serve as a proxy for improving content usability, then usability appears to be increasing faster in much of the world than within GDPR-regulated countries. GDPR countries did show some improvement between 2017 and 2019, but it was not as drastic as in the rest of the world.

GDPR may be slowing down user experience improvements, but why would privacy legislation matter for daily internet activities?

The reshaping of digital interfaces to comply with GDPR has detrimental effects on both companies and end-users. Specifically, SMBs (small and midsized businesses) and startups suffer more because they lack the resources and staff to upgrade their systems and interfaces to comply with the law while maintaining daily operations. Some companies have shut down their services within the EU altogether because of concerns about compliance with the GDPR.

GDPR also may limit the accessibility of foreign content by European audiences, since many international firms simply do not have the resources or market incentives to adhere to the regulations. Two months after GDPR went into effect, 30% of the most popular U.S. news websites were forced to block access to the EU due to their inability to comply with the GDPR requirements, and some of these websites are still not available. The list included Pulitzer Prize-winning publishers like The Chicago Tribune.

These examples of domestic and international businesses eschewing the European market because of GDPR should not surprise those who are familiar with the law, as penalties are quite extreme for those who fail to comply with the regulation. For example, companies that fail to comply with GDPR may be punished with up to 4% of global sales or 20 million euros, whichever is higher.
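The fine ceiling just described is a simple maximum of two figures. As a minimal sketch of that arithmetic (the function name and example revenues are hypothetical, not drawn from any statute text):

```python
def gdpr_penalty_cap(global_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine as described above: the greater of
    4% of global annual sales or a flat 20 million euros."""
    return max(0.04 * global_annual_revenue_eur, 20_000_000.0)

# A firm with 10 billion euros in global sales faces a cap of
# 400 million euros; a small firm is still exposed to the flat
# 20-million-euro floor.
large_cap = gdpr_penalty_cap(10_000_000_000.0)  # 400 million euros
small_cap = gdpr_penalty_cap(5_000_000.0)       # 20 million euros
```

Because the flat floor binds whenever 4% of revenue falls below 20 million euros, even very small firms face the same minimum exposure, which helps explain why some publishers chose to exit the EU market rather than risk non-compliance.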

Europe’s situation is not without hope: policymakers can still reform GDPR to be more business- and user-friendly. As U.S. states draft privacy regulations and talk of national data privacy legislation grows, GDPR’s unintended outcomes should not be ignored. The European experience should be analyzed further to find a balance between reasonable protection of Americans’ privacy and our ability to use the internet now and in the future.

Occupational licensing undermines some of the value of technological innovation Fri, 07 Oct 2022 19:00:00 +0000 A new study finds that occupational licensing reduces value-creation within digital marketplaces.

The post Occupational licensing undermines some of the value of technological innovation  appeared first on Reason Foundation.

Technological innovation in the form of digital marketplaces has the potential to radically improve consumer well-being through expanded choice, convenience, and access to information. But government regulations sometimes stymie that innovation in ways that are tangibly harmful to consumers.  

One particularly prevalent and pernicious form of regulation is occupational licensing, essentially a government-issued permission slip required to enter certain regulated occupations. The share of U.S. workers required to hold an occupational license has exploded from around 5% in 1950 to 25% in 2020. Many occupations within the home services industry, which employs nearly six million American workers, require an occupational license, but states vary widely in which occupations they license.

In a recent National Bureau of Economic Research working paper, Harvard researcher Peter Q. Blair and a co-author examined the effects of occupational licensing on consumer experiences with Angi’s HomeAdvisor, a popular digital marketplace for home services such as repairs, maintenance, and remodeling. They analyzed a 2019 New Jersey law that created a new licensing requirement for pool contractors and also used national variation in state licensing requirements to assess the impact of licensing across a wider range of service tasks.

The authors use the ‘accept rate’ to measure the impact of licensing, defining it as “the likelihood that a customer engaged in search on a digital platform finds at least one worker who is legally permitted to perform the task given the licensing requirements for the task.” They found that New Jersey’s decision to license pool contractors reduced the accept rate by 16 percent. Their analysis of national variation in licensing requirements across a broader set of occupations revealed that licensing reduced the accept rate by 25 percent. In other words, in states where a license is required to perform a task, consumers are significantly less likely to find a qualified service provider than in states that do not require a license for those same tasks.
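The accept rate quoted above can be computed directly from search records. A minimal sketch, using a hypothetical dataset in which each search lists whether the matched workers are legally permitted to perform the task (the layout and names are my own, not the paper's actual data):

```python
def accept_rate(searches):
    """Share of customer searches that surface at least one worker
    legally permitted (e.g., licensed where required) to perform
    the requested task -- the metric defined in the quote above."""
    if not searches:
        return 0.0
    accepted = sum(1 for workers in searches if any(workers))
    return accepted / len(searches)

# Each search is a list of booleans: True means that matched worker
# may legally perform the task given the state's licensing rules.
searches = [
    [True, False],   # at least one permitted worker -> accepted
    [False, False],  # matches exist, but none legally permitted
    [True],
    [],              # no workers matched at all
]
rate = accept_rate(searches)  # 2 of 4 searches accepted -> 0.5
```

A lower accept rate means more searches end without any legally eligible provider, which is the consumer harm the paper measures.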

Their findings add to a growing body of literature finding that occupational licensing reduces the value of digital marketplaces for consumers. Previous research has shown that occupational licensing’s effects on digital platforms do not result in higher consumer satisfaction or safety, only higher prices. Economists have broadly found that occupational licensing increases prices by limiting the supply of workers in regulated occupations. Typical requirements to obtain a license include education, training, and the payment of fees. These requirements act as a barrier to entry for many prospective workers, especially the poor, formerly incarcerated individuals, and other disadvantaged groups. Given the consistent finding that licensing does not meaningfully improve quality or consumer safety, its presence is a net loss for consumers.

This new research and other similar studies are important to understanding the impact of government regulation on consumers. Occupational licensing is a perfect example of a well-intentioned policy gone awry; while policymakers may have had noble intentions of protecting consumers and ensuring quality, research has demonstrated that licensing often fails to achieve these goals. Instead, occupational licensing creates barriers to opportunity, raises prices, and, as this new NBER paper suggests, reduces the value created by technological innovations. These findings further suggest that occupational licensing reform is necessary and that policymakers should be mindful of unintended consequences when establishing regulatory frameworks.

Congress aims at big tech companies but would hurt startups and innovation Fri, 09 Sep 2022 22:29:05 +0000 The bill aims to limit big tech's power, but it would actually end up limiting innovation, start-up companies, and economic growth.

The post Congress aims at big tech companies but would hurt startups and innovation appeared first on Reason Foundation.

Sen. Amy Klobuchar (D-MN) and Sen. Tom Cotton (R-AR) were among the bipartisan cosponsors of the Platform Competition and Opportunity Act (PCOA), a bill that proponents claim would limit the power of so-called big tech companies. The legislation would set a strict standard for mergers and acquisitions, drastically increasing the workload for companies and the agencies that oversee these transactions and potentially limiting the capacity for economic growth.

Typically, a company looking to combine with another firm may either join it to create an entirely new entity, called a “merger,” or purchase the smaller company and absorb it in an “acquisition.” There are two main types of acquisitions. In a “killer” acquisition, the acquired firm is immediately shut down. In a “nascent” acquisition, a large company purchases a smaller one because it wants the firm’s technology or employees.

Companies looking to combine must apply to the Federal Trade Commission and the Department of Justice to prove that a transaction will not decrease market competition. Most of these applications are successful on their first review. Only 2% of applications require additional scrutiny from regulators, called “second requests,” during which the company must prove that the deal will not harm market competition. These requests can be incredibly laborious for companies to comply with. Providing all documents can take an organization over 10 months, and a company may have to provide information like organization charts, product specifications, and employee testimony. One “model” second request asks a company to provide 112 different types of evidence, each of which must be uniquely formatted and organized.

The PCOA seeks to prevent tech companies from buying smaller companies. It would also significantly increase the current steps and standards for tech companies involved in mergers and acquisitions, raising evidence requirements so unnecessarily and harshly that compliance and litigation costs would climb dramatically, potentially stymying tech innovation without helping taxpayers or consumers.

The PCOA targets technology-related transactions worth more than $50 million so it can reach major technology firms such as Amazon, Apple, Facebook (Meta), and Google. However, analysis of previous Federal Trade Commission data shows some of the bill’s flaws.

For example, Twitter acquired messaging company Quill in 2021. Twitter shut Quill down and absorbed its team, with Twitter’s direct manager for core technology tweeting that the company would integrate Quill’s technology into its direct messaging system. Although this scenario seems precisely like the type of transaction the PCOA would like to monitor or block, it would go unreported under the PCOA because Twitter bought Quill for $16 million, far below PCOA’s $50 million threshold.
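The reporting gap works mechanically: the bill's heightened standards attach only above the dollar line. A hedged sketch (the $50 million threshold and the $16 million Quill price come from the text; the helper name and the second deal value are hypothetical):

```python
PCOA_THRESHOLD_USD = 50_000_000  # the bill's $50 million line, per the text

def covered_by_pcoa(deal_value_usd: int) -> bool:
    """True if a technology deal is large enough to face the
    PCOA's heightened merger standards; deals at or below the
    threshold go unreported regardless of their character."""
    return deal_value_usd > PCOA_THRESHOLD_USD

quill_deal = covered_by_pcoa(16_000_000)  # Twitter/Quill: escapes review
bigger_deal = covered_by_pcoa(60_000_000)  # hypothetical deal above the line
```

A flat dollar cutoff is thus both under- and over-inclusive: it misses small "killer"-style deals like Quill while sweeping in larger deals that pose no competitive concern.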

Meanwhile, the PCOA would apply its strict terms to deals above the $50 million threshold, even if it seems clear that the deal wouldn’t affect market competitiveness. For example, deals like Zoom buying the customer service tech firm Five9 to help it branch out into other markets would needlessly be put under regulators’ microscopes. Similarly, Apple purchasing PA Semi, which increased semiconductor competition by helping Apple create its own chips, would be subject to intense federal scrutiny—even though these types of moves improve the market.

Passing the PCOA would also burden the government agencies that review mergers and acquisitions. The bill would require the Federal Trade Commission and Department of Justice to verify information that applicants submit, meaning that an increased applicant workload for tech companies would translate into increased regulator workloads.

The FTC has previously discussed how increasing investigation requirements hinder its ability to review transactions efficiently, noting that the expansion of information certification requirements, which began in the 1990s, continues to strain the organization. As the FTC emphasized in a report, “an unintended collateral effect [of increasingly complicated merger analysis] has been to increase the burden on the parties and the agencies.”

Stress on the FTC’s merger analysis employees has only continued to grow since then. The commission’s funding and staff have decreased yearly for the past 12 years. In 2021, the FTC had to modify its review process because it was so far behind in investigating applications. It is unlikely that the FTC could effectively handle the additional verification associated with reviewing applications under a PCOA-style system.

PCOA would create strict standards for technology mergers and acquisitions that research suggests would burden both technology companies and regulatory agencies. The bill aims to limit big tech’s power, but it would actually end up limiting innovation, start-up companies, and economic growth.

What the movement to break up big tech gets wrong about our digital economy Fri, 05 Aug 2022 18:30:00 +0000 The uncertainty, fast-moving innovation, and large pool of ideas that characterize online platforms make new competition inevitable. 

The post What the movement to break up big tech gets wrong about our digital economy  appeared first on Reason Foundation.

Those concerned over the size, apparent market dominance, and influence of widely used internet platforms often focus on a “Big Four” featuring Apple, Google, Facebook, and Amazon. These and other tech companies have drawn politicized attacks from the left and the right, and regulatory action from the Federal Trade Commission (FTC) and Department of Justice (DOJ). Those leading the charge within the Biden administration are part of a new intellectual movement in antitrust economics: the New Brandeisians.

Legal scholar Lina Khan, among the movement’s thought leaders, now chairs the Federal Trade Commission. Her 2017 article “Amazon’s Antitrust Paradox” outlines the basic economic rationale cited by those concerned that the size of today’s leading internet platforms stifles market competition: 

For the purpose of competition policy, one of the most relevant factors of online platform markets is that they are winner-take-all. This is due largely to network effects and control over data, both of which mean that early advantages become self-reinforcing. The result is that technology platform markets will yield to dominance by a small number of firms…Network effects arise when a user’s utility from a product increases as others use the product. Since popularity compounds and is reinforcing, markets with network effects often tip towards oligopoly or monopoly. 

In work published the same year as Khan (2017), antitrust economists David Evans and Richard Schmalensee distill both technical academic work and historical experience into a concise and convincing rejection of Khan’s basic argument:

Unfortunately, the simple network effects story leads to naïve armchair theories that industries with network effects are destined to be monopolies protected by insurmountable barriers to entry, and media-friendly slogans like “winner-take-all.” 

The authors conclude that New Brandeisians have not caught up with mainstream economists’ more sophisticated understanding of network effects. They are correct, but this critique does not go far enough. The realities of internet platforms and associated technology have radically altered the competitive landscape and point toward an even stronger rejection of New Brandeisian thinking. 

Telephones and VCRs 

Evans and Schmalensee argue that economists’ “view of network effects evolved from a seminal economic contribution to a set of slogans that don’t comport with the facts.” Two of the first industries where economists identified and studied network effects, landline telephone service and VCRs, remain canonical examples of the phenomenon: 

“A telephone was useless if nobody else had one. A telephone was more valuable if a user could reach more people. Economists called this phenomenon a direct network effect; the more people connected to a network, the more valuable that network is to each person who is part of it.” 

VCRs illustrate the phenomenon of indirect network effects. Two incompatible technical standards (VHS and Betamax), “roughly comparable in cost and performance,” competed for consumers in the early market for VCRs. More consumers adopting a given standard incentivized sellers of video tapes to provide more offerings using that standard. The early industry is widely believed to have reached a tipping point in favor of VHS, which dominated the home movie market until the introduction of DVDs. 

Antitrust concerns arise in cases where “winning” firms or technical standards reach a critical mass and become locked in. Potential new entrants must build large consumer bases to become competitive, a highly risky proposition for entrepreneurs and investors alike. The result is significant market power, where the incumbent can set high prices and leverage its power in markets for complementary products. Both direct and indirect network effects provide opportunities for anticompetitive behavior by a dominant incumbent that further hinders entrants from achieving scale. 

Khan and fellow New Brandeisians such as Columbia University law professor Timothy Wu draw heavily from these basic early examples of direct and indirect network effects when they argue that online platforms are “winner take all” and market competition is an insufficient check on the power of winning firms.  

Old Models and New Reality 

Evans and Schmalensee survey later work by economists on network effects that call these basic stories into question. For example, users of a given online platform interact in many different ways, blurring the line between direct and indirect network effects and casting doubt on the idea that sheer size is a ticket to unstoppable market dominance. Facebook began as a platform specifically targeting college students. The restaurant reservation platform OpenTable succeeded when it began focusing on connecting diners and restaurants in specific cities. Evidently, there are many paths for new entrants to build large user bases, and much scope for dominant incumbents to fail to innovate and make strategic errors. 

But one can go further than Evans and Schmalensee in criticizing the New Brandeisians’ applications of early network-effect theory to today’s online platforms. A fundamental change took place when “high tech” industries went from telephones and VCRs to e-commerce, social networking, and online search. In the former cases, market entry required large investments in physical capital, such as laying telephone lines and building factories. Similarly, consumers often faced large upfront hardware costs to “join a new network,” such as buying a new VCR or telephone. 

The economics of internet platforms and many online businesses present a different competitive reality. Utilizing already-existing physical infrastructure (broadband and wireless data transmission) and user hardware (computers and smartphones), new entrants face vastly lower startup costs. Platform users face almost no upfront costs at all. In the cases of telephones and VCRs, the upfront hardware costs for consumers were so high relative to the benefits of adoption that economists often called them “switching costs.” In contrast, those reading this article may have windows currently open to Facebook, Google, and Zoom. They may switch between applications for the same function on a regular basis, or use them simultaneously for different purposes. 

Note the last name on that list. Since 2017, New Brandeisians have maintained a steady drumbeat that Facebook’s and Google’s user bases would prevent innovation and new entry in applications already offered on their platforms, leaving users stuck with inferior services shielded from competition. During the same period, Zoom has gone from a mostly unknown startup to a market leader in virtual meeting platforms, eclipsing offerings from both Facebook and Google. 

Evans and Schmalensee recognize the significance of reduced upfront user costs when they observe that “network effects can work in reverse.” They cite the now-famous list of once dominant platforms, such as Friendster and MySpace, that went from being portrayed as nearly unstoppable in the media to digital ghost towns in only a few years’ time. This “churn” in leading online platforms is indeed among the most salient critiques of the New Brandeisian antitrust approach.  

Evolution Beats Intelligent Design 

Network effects only “tip markets toward monopoly or oligopoly” when competitors and consumers face high up-front costs to creating and joining new networks. Many successful online ventures, including some of today’s members of the “big tech” club, began as much smaller projects by garage-sized startups and hobbyists. The primary threat of entry faced by today’s big-tech platforms is not from well-capitalized startups with business models nearly identical to the big players. The bigger threat comes from new innovators that dominant firms cannot identify and effectively fight off, often because such innovators do not yet realize they are a competitive threat. 

This dramatically different type of competition stems from the radical uncertainty of a new and still-evolving business model. This perspective is more commonly associated with Austrian economics than the mathematical models and statistical analyses forming the basis of Evans and Schmalensee’s critiques. But combining these two ideas suggests network effects in today’s digital industries may actually fuel competition over time instead of stifling it. 

The large stock of potential entrepreneurs, and their ability to experiment and quickly pivot their business models to learn what consumers want and how to provide it, fuels a learning process that generates new ideas which eventually overtake the best guesses of even the sharpest big-tech CEOs.

Ever wonder why the brand names of so many of today’s tech giants have become words in common usage, such as Googling a topic, “friending” someone, or more recently, “zooming” one’s colleagues? Verbs for these platform services often did not exist before today’s large firms invented the services they provide. In most cases, these inventions were born not of a single big idea but of a process of experimenting, tinkering, and ultimately competing.  

Online platforms grow and succeed through evolution rather than intelligent design. End results are not fully planned but far more robust for precisely this reason. Today’s giants benefitted from similar competitive processes, and given how network effects interact with the digital world’s radically different cost structure, one struggles to find a reason the process will cease. 

The basic but somewhat outdated logic of the earliest network-effects industries studied by economists forms a central pillar of New Brandeisians’ aggressive stance toward big tech. On their own, mainstream antitrust economists like Evans and Schmalensee, as well as Austrian economists focused on dynamic innovation and entrepreneurship, each offer serious challenges to those who would break up today’s giants. Combining the ideas of both critics reveals the notion of “winner take all” in online platforms as unsound economic thinking. 

The Promise of Entry 

The novel competitive realities of online platforms and other e-commerce markets convincingly reject New Brandeisian thinking. But those who wish to see the behavior of big tech through rose-colored glasses or reject antitrust policy out of hand will also be disappointed. The highly important and still-evolving platform industry raises many questions, but seriously considering these questions requires dispensing with antitrust thinking that amounts to little more than applying ideas about 20th century industrial giants to 21st century tech giants. 

The evolutionary process that yields a steady stream of new and unexpected challengers is far from unique to internet-era competition. One sees echoes of these ideas in Clayton Christensen’s “The Innovator’s Dilemma,” which provides numerous examples of disruptive technologies (like DVDs versus VCRs) that dominant firms using the old technology could neither foresee nor effectively compete with. The unique cost structure and rapid pace of change in online markets is not a new phenomenon, but one sped up to the point that old models of competition no longer apply. 

The process of rapid innovation and learning almost inevitably gives “winners” considerable market share for at least a short period of time. This, along with the unique capabilities of online platforms to serve consumers but also influence society, deserves careful thought. Antitrust economists used to speak of the idea that monopolists could be disciplined by the “threat of entry.” The uncertainty, fast-moving innovation, and large pool of ideas that characterize online platforms make new competition over time less a threat and more a promise. 

The post What the movement to break up big tech gets wrong about our digital economy  appeared first on Reason Foundation.

Social media companies and Section 230 are not to blame for Jan. 6 riot Wed, 27 Jul 2022 04:00:00 +0000 Section 230 helps protect free speech online and succeeds by rightly stating that companies should not be held responsible for the actions of their users.

The post Social media companies and Section 230 are not to blame for Jan. 6 riot appeared first on Reason Foundation.

As lawmakers continue to investigate and hold hearings on the Jan. 6 riots, some claim social media companies like Facebook should be getting a bigger share of the blame. One of Facebook’s fiercest critics on the topic is Frances Haugen, a former Facebook data scientist and product manager turned whistleblower who shared company documents with Congress and the media. CNN reported on Haugen’s claims and the documents she leaked last year: 

One of Haugen’s central allegations about the company focuses on the attack on the Capitol. In a SEC disclosure she alleges, “Facebook misled investors and the public about its role perpetuating misinformation and violent extremism relating to the 2020 election and January 6th insurrection.”

Leaked documents from Haugen first began appearing in The Wall Street Journal earlier this year. Revelations in the newspaper’s ongoing series of reports, The Facebook Files, captured the attention of lawmakers around the world. Facebook denies the premise of Haugen’s conclusions and says Haugen has cherry-picked documents to present an unfair portrayal of the company.

“The responsibility for the violence that occurred on January 6 lies with those who attacked our Capitol and those who encouraged them. We took steps to limit content that sought to delegitimize the election, including labeling candidates’ posts with the latest vote count after Mr. Trump prematurely declared victory, pausing new political advertising, and removing the original #StopTheSteal Group in November,” Facebook spokesperson Andy Stone told CNN Friday.

As the Jan. 6 committee continues its private interviews and public hearings, the focus should be on the actions of rioters and government officials. Politico reports that “more than 855 members of that crowd are facing charges that range from trespassing on restricted grounds to seditious conspiracy,” and “325 defendants have pleaded guilty to crimes stemming from the breach of the Capitol, the vast majority to misdemeanor crimes” so far.

Facebook is not ultimately accountable for what the rioters did or what President Donald Trump posted to his social media accounts after the Nov. 2020 elections or leading up to Jan. 6. In the Internet’s infancy, Congress passed the Communications Decency Act of 1996, which includes a clause known as Section 230. This clause provides crucial protections for social media and content platforms because it does not hold the companies liable for the speech that users post on websites. Essentially, just as a local water utility is not to blame for whatever a user flushes into the pipes, Section 230 says that technology platforms like Facebook are not to blame for users’ posts. 

Drawing on years of customer data, social media companies have generally concluded that a moderated user feed creates happier users and, ultimately, more profit: limiting the amount of undesirable content in customers’ feeds improves the user experience and is good for business. Today, most large-scale online platforms monitor and curate what their users post to some degree. 

Companies like Facebook utilize algorithms to automatically flag and remove inappropriate content and also provide users with methods to flag content for review. But Harvard University legal scholar Jeffrey Hermes summarizes the potential impact on social media companies like Facebook and YouTube if Section 230 was repealed:

Think about YouTube. [Without Section 230 protections,] Google today would need to hire people with sophisticated legal backgrounds to review every single piece of content on that site. There would not be enough hours in the day. You would need to have literally millions of lawyers whose only responsibility would be reviewing user videos.

Hermes is right. But if Section 230 were repealed, Facebook and Google would also be among the few companies with the money and resources to try some form of content moderation. In contrast, most smaller competitors couldn’t afford to defend themselves against every possible frivolous lawsuit related to every user’s posts. Most companies would likely stop letting people post to their platforms to avoid potential legal liability. In a future without Section 230, very few new competitors would have the financial strength to enter the market to compete with massive social media companies. Thus, big tech companies like Facebook could be further strengthened, and users would have fewer choices.  

In recent years, prominent lawmakers in both major political parties have called for repealing Section 230. But Section 230 helps protect free speech online and succeeds by rightly stating that companies should not be held responsible for the actions of their users. News publishers, websites, and social media platforms are all still entirely liable for the content they create. 

Jan. 6 was a terrible day, and lawmakers should continue investigating and pursuing accountability. But Section 230 and social media companies aren’t to blame. If Section 230 were weakened or repealed, it would not produce a better Internet. It would launch a slew of frivolous lawsuits and unleash massive attempts to censor online content.


Report says big tech monopoly claims are overblown Wed, 13 Jul 2022 04:00:00 +0000 The size of big tech companies alone should not automatically subject them to antitrust exposure.

The post Report says big tech monopoly claims are overblown appeared first on Reason Foundation.

Economist Art Laffer recently released a report analyzing three pieces of proposed antitrust legislation making their way through Congress. The study, published by the Committee to Unleash Prosperity, which is led by Laffer, Steve Forbes, and former Donald Trump advisor Stephen Moore, argues that these proposed bills falsely assume a state of entrenched market power in technology while ignoring consumer gains in the form of lower prices and better products.

First, the paper by Art Laffer and John Barrington Burke suggests that monopoly claims against firms like Amazon, Google, Facebook, Netflix, Apple, and others could be overblown as measured by the level of firm concentration in the economy. The two measurements it cites to assess concentration are the Herfindahl–Hirschman Index (HHI) and the concentration ratio of the top four firms (CR4). 

The HHI and CR4 aren’t just theoretical concepts of market competition analysis; the Department of Justice has employed both to assess the impacts of proposed mergers. Both measures have major flaws, however, and concentration is not the same as monopoly power. The HHI is the sum of the squared market shares of every firm in an industry and shows how competitive the industry is overall, while the CR4 is the sales of the top four firms as a percentage of total industry sales, showing how much of the industry’s output those firms account for. 
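
Both measures reduce to a few lines of arithmetic. The sketch below uses hypothetical market shares purely for illustration; real analyses draw shares from Census revenue data.

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares.
    Shares are percentages (0-100), so a pure monopoly scores 10,000."""
    return sum(s ** 2 for s in shares)

def cr4(shares):
    """Four-firm concentration ratio: combined share of the four largest firms."""
    return sum(sorted(shares, reverse=True)[:4])

# Hypothetical ten-firm industry (shares sum to 100%).
shares = [20, 15, 12, 10, 10, 9, 8, 7, 5, 4]
print(hhi(shares))  # 1204
print(cr4(shares))  # 57
```

Even this fairly concentrated hypothetical industry scores well below the 10,000 of a pure monopoly, which is why an economy-wide HHI figure needs careful interpretation.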

Between 2002 and 2017, a period of major growth for technology firms, the concentration ratio of the top four firms meaningfully shifted among tech firms. Using the North American Industry Classification System (NAICS) business code system, the concentration ratio of the top four firms in the category of software publishers and data processing, hosting, and related services decreased from 39.5% in 2002 to 35.4% in 2017. Meanwhile, other business categories, like the consumer lending industry, for example, had their CR4 go from 60% to 50% over the same time period. Thus, the consumer lending CR4 was still nearly 15 percentage points higher than the information technology CR4.  

U.S. Census data show that the HHI for information technology is far below monopoly concerns and in line with most other industries. Under federal merger guidelines, an HHI above 1,500 marks a moderately concentrated market and an HHI above 2,500 a highly concentrated one; a pure monopoly would score 10,000. As of 2017, the latest Census data available for this report, the information technology industry had an HHI of just 239, far from monopoly levels as measured by either the CR4 or HHI.  

The most valid criticism of the study is that the NAICS code system used to track industry concentration and sales is flawed and does not capture actual market dynamics. For example, there is no NAICS code for search engines that accurately captures the shape of the market. Critics argue that if NAICS were more detailed, the data would reveal that the industry is more concentrated than the current data suggest. The two categories the study cites as being less concentrated, software publishers and data processors, contain firms like Salesforce, popular video game makers Electronic Arts and Activision, and lesser-known companies like Perspecta. Critics argue that the real monopolists sit in other data categories, such as internet publishing, information services, and e-commerce, that are not broken out in the Laffer analysis, which hides their true market dominance. 

However, another study that used a proprietary method to stitch together NAICS data from 2002 to 2017 found that even the NAICS categories containing electronic shopping, search, music publishing, taxi services, and video distribution have seen only slight increases in concentration and remain less concentrated than many other industries. 

A monopoly cannot be judged by market share alone. The Laffer paper emphasizes that U.S. antitrust enforcement is governed by the consumer welfare standard: the question is whether a firm can raise prices or degrade product quality without losing customers to competitors. That has not been the case in the market for digital advertising and search. The price of digital advertising has fallen by almost 30% since 2009, while online search engines have remained free. If digital advertising customers really had nowhere else to go, Google would likely be raising prices, but that is not what the data show. A typical digital ad that cost $100 in 2009 cost only about $75 in 2021, meaning most companies can now connect with customers more cost-effectively, due in part to improvements in services that Google has invested in. 

In 2019, Google invested $26 billion in research and development, roughly 14% of total non-manufacturing private-sector research and development spending, and it has cumulatively invested over $171 billion in research and development since 2013. This research has improved the performance of its search product, which translates directly into more ad revenue for the company. While Google’s strong brand and market prominence would likely dissuade some potential competitors, nothing in an antitrust sense precludes investors and competitors from entering the search and advertising space.

If Google truly had monopoly status and felt no pressure from the competition, we would expect to see them invest less in research and products, raise prices, and allow their products’ quality to decrease without consequence. In terms of search, Google has maintained its market share by continually improving the product while keeping the price charged to search engine users at zero, not by using any coercive power to block out competition. 

Google has recently faced criticism and pressure for tracking user data. It has also seen competition from firms like DuckDuckGo, which promises not to track users’ data. With a fraction of Google’s resources, approximately $172 million in investment, DuckDuckGo has achieved a 2.5% share of the search engine market.  

The Laffer paper goes on to invoke both Moore’s law, the observation that the number of transistors on a chip doubles roughly every two years, halving the cost per transistor, and Wright’s law, which says that for every cumulative doubling of units produced, the cost to produce falls by a constant percentage, about 15% in many industries. Both of these phenomena help tech companies produce better machine learning models and algorithms at ever-lower costs.  
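
Wright’s law lends itself to a one-line calculation. The sketch below is a generic illustration with hypothetical numbers, not figures from the Laffer paper; a progress rate of 0.85 encodes the roughly 15% cost reduction per cumulative doubling.

```python
import math

def wright_cost(first_unit_cost, cumulative_units, progress_rate=0.85):
    """Wright's law: each cumulative doubling of output multiplies unit cost
    by the progress rate (0.85 means about a 15% reduction per doubling)."""
    doublings = math.log2(cumulative_units)
    return first_unit_cost * progress_rate ** doublings

# Hypothetical: a $100 first unit, after 1,024 cumulative units (10 doublings).
print(round(wright_cost(100, 1024), 2))  # 19.69
```

Ten doublings compound to roughly an 80% cost reduction, which is how steadily growing output alone can drive the dramatic cost declines the paper describes.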

Faster computer chips can process more data at one time while smarter machine learning models can make the data more intelligible and profitable. Laffer explains how artificial intelligence has improved business processes, “AI can also go beyond traditional A/B testing to make predictions about how creative will perform…by using historical data to determine what kind of colors and messaging will connect with consumers and drive sales.”  

These techniques have improved digital advertising effectiveness and the customer experience through better personalization and more relevant ads. Laffer argues this happened because internet businesses need less real estate and fewer physical goods than traditional businesses and can therefore invest more in software, such as machine learning, that continually improves efficiency at a lower cost of operation. While this has produced historically valuable companies, Laffer emphasizes that it is because they have delivered equivalent value to their customers and partners.  

The paper concludes by analyzing the rise of Amazon’s Marketplace seller fees through the framework of the Laffer Curve, which holds that there is an optimal tax rate that maximizes government revenue. At a rate of 0%, the government collects nothing; as the rate approaches 100%, taxation discourages so much economic activity that revenue falls as well. Somewhere between the two extremes lies the revenue-maximizing rate.  
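
The logic of the Laffer Curve can be made concrete with a toy model. The sketch below assumes, purely for illustration, that the taxable base shrinks linearly as the rate rises; the true shape of the response is an empirical question.

```python
def revenue(rate, base_at_zero=100.0):
    """Toy Laffer curve: revenue = rate * base, where the base shrinks
    linearly with the rate and vanishes entirely at 100% (an assumption)."""
    return rate * base_at_zero * (1 - rate)

rates = [i / 100 for i in range(101)]
best = max(rates, key=revenue)
print(best)                        # 0.5 (revenue-maximizing rate)
print(revenue(0.0), revenue(1.0))  # 0.0 0.0 (nothing at either extreme)
```

In this linear toy model the peak sits at exactly 50%; with any other base response the peak moves, which is why the optimal fee level is something a platform can only discover by experimenting.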

The antitrust bills in Congress claim that Amazon is leveraging monopoly power to raise marketplace seller fees to unreasonable rates. Applying the Laffer Curve, Laffer and Burke argue that if the fees were too high, sellers would jump to other, more profitable options such as Shopify, Etsy, and eBay, and Amazon’s revenue would decrease because it would be “overtaxing” its economic participants. However, the study finds that this is not the case: Amazon marketplace revenues have steadily increased over the last decade, the number of marketplace sellers has grown, and total sales have risen, demonstrating that participants still find value despite the increased marketplace fee rate.  

In other words, the price of selling on Amazon was likely too low and Amazon is slowly discovering the true value of its service, steadily increasing it until it finds resistance and loses revenue. If Amazon continues to increase the rate, at some point sellers will find another way to sell those goods and we should see Amazon’s revenues decrease.

Contrary to the narratives used to argue against big tech companies like Google and Amazon in proposed antitrust legislation, the Laffer-Burke paper argues that these companies are the largest and most profitable companies we have ever seen, in large part, because they have innovated ways to deliver high-value products at relatively low costs by leveraging advances in digital technology. 

The size of these companies alone should not automatically subject them to antitrust claims. Lawmakers should consider the benefits that businesses and consumers enjoy as well as the ongoing competition in the technology industry. Antitrust law should be reserved for cases where clear monopoly status can be shown to harm consumers in the form of higher prices and lower quality products.


California’s misguided bill to let parents sue for social media addiction Thu, 09 Jun 2022 21:37:57 +0000 On May 23, the California Assembly passed the “Social Media Platform Duty to Children Act.” This bill would codify the existence of “social media addiction” and says that social media companies are responsible for the harms of addiction.

The post California’s misguided bill to let parents sue for social media addiction appeared first on Reason Foundation.

On May 23, the California Assembly passed the “Social Media Platform Duty to Children Act.” This bill would codify the existence of “social media addiction” and says that social media companies are responsible for the harms of addiction. The law would enable parents to sue social media companies for up to $25,000 should their child become “addicted.” 

The Associated Press reports: 

California could soon hold social media companies responsible for harming children who have become addicted to their products, permitting parents to sue platforms like Instagram and TikTok for up to $25,000 per violation under a bill that passed the state Assembly on Monday.

The bill defines “addiction” as kids under 18 who are both harmed — either physically, mentally, emotionally, developmentally or materially — and who want to stop or reduce how much time they spend on social media but they can’t because they are preoccupied or obsessed with it.

Business groups have warned that if the bill passes, social media companies would most likely cease operations for children in California rather than face the legal risk.

The proposal would only apply to social media companies that had at least $100 million in gross revenue in the past year, appearing to take aim at social media giants like Facebook and others that dominate the marketplace…

Monday’s vote is a key — but not final — step for the legislation. The bill now heads to the state Senate, where it will undergo weeks of hearings and negotiations among lawmakers and advocates. But Monday’s vote keeps the bill alive this year.

An analysis of the California bill, leaked documents, and media reporting suggests that the information on which the bill tries to establish these harms is far from settled science or data. The proposed law, Assembly Bill 2408, does little to establish why social media use in adolescents should be viewed as an addiction that must be prevented or cured rather than a cultural shift caused by technological development.

The bill is based partially on findings from an investigation by The Wall Street Journal. Done in collaboration with Facebook “whistleblower” Frances Haugen, who was a product manager at Facebook, The Wall Street Journal’s “Facebook Files” series focused on analyzing internal documents from Facebook.

One finding from The Wall Street Journal’s investigation included claims that Facebook’s management had evidence that the company’s photo-sharing social media platform Instagram harms the mental health of teenagers who utilize the social networking service. This report also suggested Facebook was aware of and studied this “problematic use.” Facebook’s understanding of the “problematic use” of its software is that such use represents “unhealthy” or “excessive” engagement with social media.

To California’s state Assembly, the leaked Facebook documents and WSJ report represented undeniable evidence that adolescent social media use can be excessive and medically harmful. The bill declares that the revelation from the Facebook Files proves that:

The largest social media platform company in the world’s own secret internal research validates both the existence of social media addiction in children and that social media addiction hurts children. As an example, in September 2021, The Wall Street Journal published a series of articles referred to as “The Facebook Files.” Those articles, citing a trove of internal documents obtained from Frances Haugen, a whistleblower, demonstrated the extent to which Facebook knew that its platforms cause significant harm to users, especially children…

A March 2020 presentation posted by Facebook researchers to Facebook’s internal message board reported that ‘[a]spects of Instagram exacerbate each other to create a perfect storm.’

But analyzing The Wall Street Journal’s report suggests that some of the claims state legislators are relying on may not be a completely accurate depiction of social media’s impact on teens. The WSJ claims that “In one study of teens in the U.S. and U.K., Facebook found that more than 40% of Instagram users who reported feeling ‘unattractive’ said the feeling began on the app.” Yet the real statistic in the internal Facebook documents appears to be 33%.

This numerical error alone doesn’t debunk the reporting, but it was not the only one within the Facebook Files. The WSJ largely failed to note that, out of the 12 other negative health impacts examined, body image was the only one that teenagers said Instagram did not have a neutral or positive impact on. In other categories, such as loneliness and anxiety, fewer than 13% and 12% of respondents, respectively, reported that Instagram worsened those feelings. 

The California bill also prematurely declares that “scientists, doctors, and other researchers, acknowledge the existence of social media addiction,” even though the medical community is far from unanimous. A frequently cited study from a group of researchers based in China, Hong Kong, and the United States hypothesizes the existence of social media addiction and links the phenomenon to increased depressive symptoms and decreased school performance.

However, a different study from researchers at the University of Strathclyde in Scotland determined that social media use has few characteristics in common with medical addictions, implying that social media use cannot be an addictive behavior. Another study, by researchers at the University of Michigan and Middle Tennessee State University, suggests that social media addiction, if it exists, may be brought about by stress or other life issues, which would make it not an addiction in its own right but a symptom of a larger sense of dissatisfaction.

While some experts believe social media addiction exists, the California Assembly is wrong that the issue is settled within the scientific community. The studies above show that researchers continue to disagree about social media addiction: whether it exists at all, or whether it is merely a symptom of another problem entirely.

Likewise, the idea that a younger generation’s embrace of new, popular technology is a sign of medical deficit is not new. Many technological advances throughout history have initially been branded “addictive.” For example, when the printing press made reading accessible to the public, some individuals, most prominently the British writer Vicesimus Knox, began to identify and condemn “reading mania” in the late 1700s. This addiction was first identified by parents who were concerned that their children were reading excessively and who wanted publishers and authors to answer for the “outbreak” they had supposedly caused. Edward Mangin’s An Essay on Light Reading from 1808 directly refers to voracious novel reading as a “deadly infection” that can cause mental and physical ailments, further supporting the parallel between attitudes toward reading then and attitudes about social media use today.

When telephones became popular, some outlets condemned the younger generation’s embrace of the tool. Like social media addiction today, “telephone addiction” also had a formal definition and criteria that included being unable to be away from a phone for more than three hours without suffering “anxiety tremors.” Discussing this idea at length, author Louis Anslow highlighted an article written by columnist Ellen Goodman at The Boston Globe in 1984. Goodman claimed that “[T]he telephone purveyors bear a grave responsibility for the rapid growth of [telephone] addiction in the United States,” a point that echoes California’s current sentiment toward social media companies.

New, innovative technologies almost always go through a period of rapid adoption by younger generations, who are most likely to embrace them early. History has shown that as this initial excitement dies down, using these technologies becomes part of daily life, not a disease or addiction. For example, the concept of an individual being addicted to talking on the phone has lost serious credence in the scientific world and is rarely invoked today. Indeed, many individuals now prefer not to talk on the phone at all, opting for text-based options or even rejecting notifications in order to screen callers.

California’s proposed “Social Media Platform Duty to Children Act” relies on a selective use of data to stir up a false sense of panic about a promising new technology, a panic much like those of the past. Compared with earlier innovations like mass-printed books and telephones, social media displays many of the same hallmarks: critics claim it is an addiction ruining society and that its purveyors are responsible for these “illnesses.”

The California bill would punish social media companies for innovating and create barriers that could keep new generations from benefiting from the many positive impacts these platforms have to offer. 

Parents are certainly right to monitor and be involved in how their children use social media. But before California lawmakers pass sweeping laws, they should, at a minimum, collect and study more data.

The post California’s misguided bill to let parents sue for social media addiction appeared first on Reason Foundation.

Social media companies are free to make bad decisions Fri, 01 Apr 2022 04:01:00 +0000 Social media companies are free to set their terms of service and moderate content as they choose. But this doesn’t mean their policies are smart.


Across the country, Americans are understandably concerned about free speech and issues involving social media platforms moderating and removing content, and sometimes cutting some people off from using their services. Many people, especially some conservatives, fear platforms are taking some of these steps due to political preferences.

This issue recently reappeared when, as the New York Post reported, “Twitter locked the account of a right-leaning parody site The Babylon Bee after it awarded Rachel Levine, the transgender Biden administration official, the title of ‘man of the year.’ The Babylon Bee story was a reaction to USA Today’s naming of Levine, who is US assistant secretary for health for the US Department of Health and Human Services, as one of its ‘women of the year’ last week.”

Social media companies are free to set their terms of service and moderate content as they choose. But this doesn’t mean their policies are smart. In this case, Twitter’s suspension of the Babylon Bee makes Twitter appear oblivious to satire and comedy. We can think a social media platform’s decision is wrong and misguided (the Bee suspension is), but private companies are free to get things wrong as they try to do what they think is best for their platforms.

The best way for customers to respond to businesses that have policies we don’t like is to take our business elsewhere. One of the worst responses is to call for government regulations to force those business owners to toe our preferred lines.

The last thing anyone who cares about free speech should want is politicians and government bureaucrats deciding what can and cannot be on social media platforms. Bureaucrats and regulators would be worse at finding an appropriate balance of content moderation than private firms, and their mistakes would likely apply to all platforms. The right doesn’t want people chosen by President Joe Biden regulating all social media platforms, and the left doesn’t want people chosen by former President Donald Trump choosing what social media content is and isn’t acceptable. And none of us want content rules that change whenever the political party in power changes.

Like any private business, social media platforms are based on mutually beneficial exchanges where both business and customers benefit. If either the business or a customer does not feel they’ll benefit from the exchange, they have the right to walk away. Just as we can all decide we don’t want to use a company’s services, that company can also decide it doesn’t want to provide us services. A Christian baker has the right to refuse to make a wedding cake for a gay couple. And any customers who don’t like the baker’s decisions are free to take their business elsewhere.

Indeed, our own organization, Reason Foundation, has had some of its news and analysis content wrongly flagged by social media platforms. While we certainly disagreed with the platforms, we entirely support their right to choose what goes on and is shared on their platforms. And, yes, companies like Twitter, Facebook, and YouTube have developed great influence in our society, but social media companies aren’t the government. They aren’t required by the Constitution to let us speak on their platforms.

These companies face a complex task trying to figure out what customers want, how to provide it, and how to make a profit. They certainly get things wrong, as in the case of Twitter’s Babylon Bee suspension. But the gloomy alternative is the government regulating speech, which would also produce a chilling effect whereby social media companies would likely forgo any kind of controversial content to avoid regulatory sanction.  This would be a tragedy given the diversity of conversations occurring on the internet.

Beyond pleasing customers and making a profit, it is also a private business’s First Amendment right to promote whatever speech it wants to. In 1974, the Supreme Court struck down a “right of reply” law that forced newspapers to publish the response of political candidates whose records they wrote about. The law would have forced private newspapers to print speech against their will.

Similarly, in 1995, the Supreme Court struck down a law requiring a private parade to include a gay rights group because it violated “the fundamental First Amendment rule that a speaker has the autonomy to choose the content of his own message.”

Whether it is a parade, a tweet, or a Facebook post, the constitutional principle is that users have no right to force platform providers to host their speech. Rather than calling for government regulation forcing social media companies to do what politicians or certain groups want, media consumers should develop skills at evaluating the merits of information we see online and making good decisions about what social media platforms and news sources we can trust. It is up to us to determine where we go to read, view, and learn things we don’t already know and we’re free to choose which news sites and social media platforms we want to be customers of.

A version of this column previously appeared in the Daily Caller.


Florida’s proposed data privacy law would hurt consumers and businesses Tue, 01 Mar 2022 14:06:00 +0000 Aiming to protect Floridians’ privacy, the Florida House of Representatives is considering a data privacy bill that, unfortunately, would fail to improve consumers’ privacy while also unintentionally lowering the quality of digital products available in the state and adding tens … Continued


Aiming to protect Floridians’ privacy, the Florida House of Representatives is considering a data privacy bill that, unfortunately, would fail to improve consumers’ privacy while also unintentionally lowering the quality of digital products available in the state and adding tens of billions of dollars to the cost of doing business here. While trying to give consumers more control over their data, House Bill 9 violates several of the best practices for good consumer privacy laws outlined in a legislative checklist by Reason Foundation and the International Center for Law and Economics showing how to protect individuals’ privacy without stifling innovation.

Most notably, House Bill 9 would likely reduce consumers’ access to platforms and drive companies to change business models or create a paid category of social media. Social media platforms and apps often use the data collected from users to sell advertisements. For example, around 97 percent of Facebook’s revenue comes from advertising that is based on data that its customers have voluntarily opted into sharing in exchange for using Facebook for free. If companies were forced to give free access to customers who completely blocked the sharing of that data, some platforms would have to shift to other business models and find other revenue sources to cover their costs.  

Users are already free to choose which apps they use. The large number of consumers who willingly share their data or consider viewing ads as the “price” of using and benefiting from free digital services shows most users don’t consider themselves harmed by the data they share. It is a trade-off they are willing to make. Most people are choosing to trade some of their data for digital services, knowing they can end that relationship whenever they choose.  

House Bill 9 tries to create a space for people who want to use digital services but don’t want to share their data, but this space is already developing and evolving without state legislation. Many popular web browsers, for instance, already contain features that allow users to set tracking and privacy preferences, such as cookie retention and deletion.

Different apps and platforms choose different approaches to data privacy, which is a good thing because users can select the apps they want to use based on how much data the respective app wants from the user.  This variation and innovation is better for consumers and businesses than a state law that mandates how firms are allowed to handle data. 

Importantly, HB 9 also fails to adequately distinguish between privacy and security by including a requirement that companies must “implement reasonable security procedures” to protect data. Privacy refers to how data is legally shared with other companies, while security addresses illegal breaches of that data or other improper releases resulting from technical failures and hacks. There are already laws governing security requirements for companies that collect data, plus a robust regime of civil liability if firms treat consumer data negligently.

Hence, most digital companies already have security protocols in place. HB 9, however, adds the new twist of a private right of action, which would allow any individual consumer to bring a lawsuit against a company for perceived data privacy harms. 

In the event of actual data breaches, it is rarely a single individual who is harmed. Rather, as in the Equifax breach that exposed the private information of over 100 million people, a class action is created that lets all of the harmed consumers seek redress.

The provisions in HB 9 would allow lawsuits by individuals claiming violations of any of a host of very minor procedural requirements, regardless of whether there are actual consumer harms. This could unleash a flood of lawsuits from individuals trying to force digital service companies to change their policies.

For example, the bill requires that users’ requests to opt out or be deleted be completed within 10 days. Users could thus repeatedly opt in and out of data sharing in the hope that the firm fails to comply with one of these requests in time, providing grounds for a suit.

HB 9 is ripe for abuse and would greatly increase compliance and litigation costs for companies, which would likely increase the cost of using digital services for consumers. Rather than enabling lawsuits through HB 9, lawmakers should want unhappy customers to seek out other digital service companies that offer the policies they prefer. 

Rather than reducing the products available to consumers and increasing the costs of doing business in Florida, a better path forward would be for the state to take a consumer-focused approach. That approach would start by recognizing that consumers already have a choice in whether or not to use any free digital services that require data in return. Most consumers overwhelmingly do choose to do so—viewing what they gain from the apps and platforms to be as valuable as the data they’re providing. 

At the same time, Florida could work with industry standards groups to identify best practices in determining how data is collected and protected, who it is shared with, and in ways that allow online firms to maintain their current and future business models while also addressing the concerns of consumers. Any consumer privacy law should focus on creating proportional remedies for actual consumer harms when they do occur instead of mandating requirements and actions. 


Recalibrating expectations for the true potential of automated vehicles Thu, 24 Feb 2022 18:49:00 +0000 While some critics disparage the pace of AV development, experts remain optimistic about AV technology advances and the U.S.'s automated future.


In early February, The Washington Post published an op-ed by urbanist writer David Zipper asking the question, “What exactly is the point of self-driving cars?”

While this is an important question, this particular op-ed fails to answer it. The article vacillates between pessimism about the development of automated vehicles and pessimism about their future operations, leading to a framing that centers on the worst of all possible worlds. What is missing from the piece is a dose of informed realism about ongoing automated vehicle development, how AV operations might scale in the future to benefit society, and the role of public policy in this debate. Those questions and challenges are the ones automated vehicle developers and policymakers should be looking to answer in the years and decades ahead.

The hype of automated vehicles vs. reality

Zipper is correct that hype surrounding automated vehicles has been rampant in recent years, at least in certain quarters. The 2010s were a period of overpromising and under-delivering from automated vehicle developers and their marketing departments, understandably leading many journalists and politicians to come away with very inaccurate perceptions of AV progress. The media and political class then helped extend these misunderstandings to the broader public.

For example, in 2015, Kevin Roose, now a technology columnist at The New York Times, wrote that conventional human driving on public roads should soon be outlawed entirely because much-safer automated vehicles were allegedly so close to deployment at scale. Roose believed that by 2020 “we could achieve full criminalization of driving, with penalties equivalent to those you’d get for bringing a bazooka to a schoolyard.” He wrote:

By outlawing driving and facilitating a switch to autonomous vehicles, we would make a significant and lasting impact on global public health. Thousands of lives would be saved in the U.S. alone, and those people’s families would be spared unthinkable tragedy. (We would also make cities like Los Angeles and New York eminently more livable by dramatically reducing traffic, but that’s another argument.)

Congress wouldn’t need the driving ban to kick in immediately. Like the Affordable Care Act or the Dodd-Frank Act, the No More Driving Act could be phased in over a period of several years, to allow car makers to perfect their technology and achieve mass production. Perhaps in 2017, companies that produce self-driving cars could receive a tax credit, and consumers could be paid to trade in their old, human-driven cars under a “cash for clunkers”-type scheme. Low-income families may require subsidies to make the switch. In 2018, drivers would begin to receive small fines for driving on public roads. In 2019, the punishment could become more severe—perhaps $500 citations for traveling in a human-directed vehicle. And in 2020, we could achieve full criminalization of driving, with penalties equivalent to those you’d get for bringing a bazooka to a schoolyard.

Similarly, in 2012, a decade ago, Mary Cheh, chair of the Council of the District of Columbia’s transportation committee, introduced legislation to impose mileage-based user fees on automated vehicles because she believed they would soon be ubiquitous and electric, thereby blowing a large hole in D.C.’s fuel tax coffers.

This is not to pick on Roose and Cheh; they are just two of the many people who got the timing of automated vehicles very wrong (and Cheh’s search for a viable fuel tax alternative happened to be forward-thinking and good policy even in a non-AV context). One can see how their inferences were logically consistent with the way many at the time were implying automated vehicles would progress.

Eric Paul Dennis, an engineer and policy analyst formerly with the Center for Automotive Research, tracked the often outlandish AV deployment promises from developer CEOs and company press releases and compiled them into a graphical timeline of broken dreams that is worth reviewing.

While companies and their marketing-driven hype pushed overly optimistic claims and timelines for self-driving cars, there were other more sober expert voices and opinions on automated vehicle deployments, which garnered far less public and political attention. For example, at the 2014 Automated Vehicle Summit of the Transportation Research Board of the National Academies (the world’s premier AV research conference), expert attendees were surveyed on forecasted deployment years for various levels of automation in various use cases. While company press releases and media coverage may have given consumers the idea that deployment was imminent, for the fully automated self-driving taxis that Zipper focuses on in his Washington Post op-ed, the median forecasted deployment year was 2030.

This survey of experts occurred while the automated vehicles hype cycle was nearing its peak. The median forecasted deployment year among AV experts is still eight years in the future and investors continue to place multi-billion-dollar bets on these technologies, so if one’s expectations match those of the experts and investors with skin in the game, there is still no reason for disappointment.

If Zipper and others had merely sought out the views of experts at the time, their current disillusionment with AV progress could likely have been avoided.

The safety case

Zipper points out that the National Highway Traffic Safety Administration (NHTSA) estimated in 2015 that “[t]he critical reason [for the crash] was assigned to the driver in an estimated 94 percent (±2.2%) of the crashes.” He argues that “NHTSA’s nuanced finding was often boiled down to ’94 percent of crashes are caused by human error’” and that “AV companies placed that 94 percent figure at the center of their marketing pitches.” Zipper last year wrote an article in The Atlantic calling this the “dangerous 94 percent myth.”

However, Zipper ignores decades of research by NHTSA and others that goes far beyond the two-page 2015 memo he singles out as the source of this claim. For instance, a 1977 NHTSA-commissioned study found that “conservatively stated, the study indicates human errors and deficiencies were a cause in at least 64% of accidents, and were probably causes in about 90-93% of accidents investigated” and further “that human factors were possibly a cause in up to 97.9% of accidents.”

The source of the 2015 claim was NHTSA’s 2008 report to Congress on the National Motor Vehicle Crash Causation Survey, which provided summary weighted crash frequency data indicating that of a total of 2,189,166 crashes, 2,041,943 involved critical pre-crash events attributed to drivers—or 93.3%. Importantly, driver error goes well beyond legally prohibited misbehavior such as driving while intoxicated or texting while driving. NHTSA’s Fatality Analysis Reporting System reveals that “lost in thought” is a major critical factor in distraction-affected crashes, so add daydreaming to the list of normal human driver behaviors that AVs will not engage in.
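The 93.3% figure follows directly from those two weighted totals; a minimal sketch for readers who want to check the arithmetic (the crash counts are the ones from the survey report cited above):

```python
# Weighted crash frequencies from NHTSA's 2008 National Motor Vehicle
# Crash Causation Survey report, as cited in the text.
total_crashes = 2_189_166
driver_attributed = 2_041_943  # critical pre-crash event attributed to the driver

driver_share = driver_attributed / total_crashes
print(f"{driver_share:.1%}")  # prints "93.3%"
```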

While it is indeed an oversimplification to solely blame driver error for 94% of crashes, the fact remains that decades of statistical analyses of crash data have consistently found human factors are critical factors in the vast majority of crashes. This fact was reaffirmed in January by the U.S. Department of Transportation’s National Roadway Safety Strategy, which stated, “The overwhelming majority of serious and fatal crashes include at least one human behavioral issue as a contributing factor.”

In his Atlantic article, Zipper minimizes this fact by arguing that many driver errors can actually be attributed to non-driver factors. His example: “The foggy weather obscured the driver’s vision; flawed traffic engineering failed to compel him to slow down as he approached the intersection; the SUV’s weight made the force of the impact much greater than a sedan’s would have been.”

But the example ultimately fails to absolve the driver because the driver failed to take reasonable care by driving 15 miles over the posted speed limit in adverse weather conditions—and would likely be liable for the crash in some form. To what degree would be a case-specific finding. This ambiguity gets at the real reason why NHTSA generally avoids blame language in its crash risk research: liability is complex and legally determined.

So, what is the crash-reduction potential of automated vehicles?

This is difficult to answer with precision and, as Zipper notes, AVs might generate new types of crashes. But it is safe to assume that if AVs are deployed at scale, they will crash less often and less severely than conventional vehicles, or they will eventually be driven from the market by regulators and trial lawyers. The Insurance Institute for Highway Safety (IIHS) published research in 2020 suggesting strong AV crash-reduction potential. Since AVs are anticipated to be designed to follow traffic laws (despite Zipper’s reliance on a single recent Tesla misdeed to dispute this), the IIHS study’s general methodology, coupled with reasonable assumptions, supports a conservative estimate of AV crash-reduction potential in excess of 70%.

The economic case

Zipper largely fails to present the obvious economic arguments in favor of automated driving. The discussion of freight is relegated to a single parenthetical sentence, where he concedes “self-driving trucks on highways may be more viable than self-driving cars in cities.”

This subject deserves more contemplation. The Census Bureau’s 2017 Commodity Flow Survey estimates that trucks move $11.4 trillion worth of freight every year in the U.S. and the American Transportation Research Institute estimates that driver wages and benefits accounted for 44% of trucking costs in 2020, so it is no surprise that automated vehicles have generated intense interest in the logistics industry.
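A crude upper-bound illustration of why this interest is so intense: if driver wages and benefits are 44% of per-mile trucking costs, removing the driver entirely caps the potential cost reduction at roughly that share. The per-mile base cost below is a made-up placeholder, and the sketch deliberately ignores the new costs that automated driving systems would themselves add:

```python
# The 44% driver-labor share is the ATRI figure cited in the text;
# the $2.00 per-mile base cost is a hypothetical placeholder.
cost_per_mile = 2.00
driver_labor_share = 0.44

# Upper bound on savings from full automation, ignoring added AV costs.
automated_cost_per_mile = cost_per_mile * (1 - driver_labor_share)
print(round(automated_cost_per_mile, 2))  # prints 1.12
```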

The potential impact on passenger transportation is also large. Research published in 2018 by a team of Swiss academics suggests automated driving systems have the potential to reduce taxicab operating costs by 85% in urban settings and 83% in suburban and exurban settings. In this forecast, automated taxi service costs on a passenger-mile basis would fall below the present costs of providing rail and bus transit and shared automated taxis are projected to be cheaper even than automated buses.

Automated vehicles also have the potential to significantly reduce traffic congestion through coordination with other AVs. Brookings Institution economist Clifford Winston and lawyer Quentin Karpilow modeled the economic impacts of congestion reduction in a scenario of widespread AV adoption in their 2020 book, Autonomous Vehicles: The Road to Economic Growth? They estimate that a large reduction in travel delays from AVs could raise the annual economic growth rate of the U.S. by at least one percentage point. While this might seem small, a conservative estimate would still translate to hundreds of billions of dollars in additional annual growth for the economy.
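The “hundreds of billions” framing is easy to sanity-check with back-of-envelope arithmetic. The GDP baseline below is an assumed figure (roughly U.S. GDP circa the book’s 2020 publication), not one taken from the book:

```python
# One percentage point of extra annual growth applied to an assumed
# U.S. GDP of about $21 trillion (a rough 2020-era figure).
gdp_billions = 21_000        # assumed baseline, in billions of dollars
extra_growth = 0.01          # one percentage point

extra_output_billions = gdp_billions * extra_growth
print(extra_output_billions)  # prints 210.0, i.e. ~$210 billion per year
```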

Winston and Karpilow also suggest that AVs could generate substantial private and social benefits by “improving traffic safety, health, accessibility, land use, employment, the efficiency of the U.S. transportation, and public finance.” They conclude that public policy should be reformed to remove barriers to AV development and deployment, and warn that the “failure to do so would significantly reduce the benefits of a major technological advance and could result in billion—if not trillion—dollar bills being left on the sidewalk.”

The mobility case

Zipper dismisses the appeal of automated vehicles in urban areas for those who are vision- or mobility-impaired by arguing that “in cities and suburbs, people can already call a taxi or hail an Uber.” What he ignores, as discussed in the previous section, is the potential for taxi-style AV service costs to decline so dramatically as to be cheaper to operate per passenger-mile than conventional alternatives. This would allow more people to take more trips to satisfy their personal preferences, whatever they may be.

This is especially important because in the United States lack of access to automobiles and transit dependence greatly limits employment and social opportunities, perpetuating poverty and other inequities. The University of Minnesota’s Access Across America series shows that in 2019, those residing in the 50 largest U.S. metro areas could, on average, access 47% of metro area jobs by automobile in 30 minutes of travel (or one hour of bidirectional daily commuting).

In contrast, just 8% of jobs were accessible by transit in 60 minutes (or two hours of bidirectional daily commuting). Even in the New York City metro area, by far the most transit-oriented American metro area and where more than 40% of total U.S. transit trips take place, drivers can access 13% of New York metro area jobs in 30 minutes versus just 14% of jobs in 60 minutes by transit. In nearly all of the U.S. for almost every possible origin-destination pair, mass transit is a much worse option than travel by car.

The 2017 National Household Travel Survey revealed low-income households are also generally transportation-poor households. Rather than seeking to limit their vehicle-miles of travel (VMT) and person trips, equity-focused public policy should support an increase in their VMT and daily trips. Automated vehicles have the potential to expand the large benefits of automobility to underserved populations without the large costs associated with private car ownership and the physical and cognitive abilities required of drivers.

Public policy challenges

Due to the sharp reduction in per-mile driving costs, one would also expect some increase in VMT. These additional VMT will generate large private benefits to travelers who could better satisfy their personal preferences, but may also generate social costs such as increased traffic congestion. Zipper worries that “[w]ithout some sort of restrictive policy like a vehicle-miles-traveled tax or decongestion [sic] pricing, overwhelmed streets could become mired in gridlock.”

A recent review of three dozen international AV modeling studies suggests that AVs might reduce VMT by as much as 29% or increase VMT by as much as 89%. Zipper is referring to what is colloquially known as the “hell scenario” in the world of AV modeling, in which large increases in VMT are coupled with no congestion mitigations in order to create nightmarish levels of gridlock.

Fortunately, the “hell scenario” that Zipper fears is implausible. In the real world, people are quite capable of adapting, and assuming zero adaptive behavior in response to rising congestion is unrealistic, even if some people cannot adjust their schedules or routes to avoid it. Adaptive behaviors include personal user behavior, such as avoiding areas known to be congested at certain times of day, as well as infrastructure owner-operator behavior, such as implementing congestion pricing. And if taxi-style AVs become a dominant urban travel business model, Winston and Karpilow suggest in their book that:

“…congestion pricing may become less politically objectionable, because ride-sharing travelers will be accustomed to paying a charge per use. Riders do so today, with Uber and Lyft, and the price of those services often includes additional fees (for example, surge charges or tolls) as part of the full price.”

The appropriate response to possible VMT and congestion challenges should come from infrastructure owner-operators directly managing traffic flows, not technology developers or vehicle manufacturers who at best could have a small indirect impact on traffic flow (such as through the deployment of synchronized connected and cooperative automation technology). Congestion pricing solves this problem by ensuring those who enjoy the private benefits of travel are also internalizing social costs associated with their travel.

But much more important than congestion hypotheticals are policy considerations related to near-term development and deployment. Zipper mentions Federal Motor Vehicle Safety Standard (FMVSS) exemptions, but fails to explain the two reasons why modernizing the FMVSS exemption regime is so important for emerging AV technologies. The more obvious of the two reasons is that absent an FMVSS overhaul to fully incorporate AVs in the federal auto safety regulatory ecosystem, the current limit of 2,500 exempt noncompliant vehicles per year over two years (with a potential two-year renewal) effectively prohibits the deployment of light-duty AVs with novel designs at scale.

But the other reason is arguably even more important. Because proposed FMVSS exemption reforms would still require AV developers to demonstrate that their non-compliant vehicles achieve a level of safety equivalent to or better than that of conventional FMVSS-compliant vehicles, the data and analysis supporting exemption applications would be extremely valuable to regulators as they attempt to modernize the FMVSS regime for AVs in the coming years. A recent RAND Corporation study found that the traditional analytical tools and metrics used by safety regulators are generally not suitable for emerging AV technologies. Thus, new ones will need to be developed, and the FMVSS exemption process is perhaps the best way for regulators to gain insight into the various safety cases being made by AV developers.

Before demanding “convincing answer[s]” from AV developers, as Zipper suggests, we ought to appreciate that in many domains there are none. We will be dealing with a large amount of uncertainty about both AV technology and policy for some time. It is also important to keep in mind that excessive risk-aversion to AV errors generates another form of risk: if AVs do in fact reduce crash risk, even if they are not flawless, any restriction or delay caused by over-cautious public policy translates to more property damage, injuries, and deaths than would otherwise have been the case.

Making the perfect AV the enemy of the good AV would be a deadly mistake and this dangerous precautionary approach should be forcefully rejected by state and federal officials. As Aaron Wildavsky, the late political scientist who made pioneering contributions to risk management, concluded in his 1988 book Searching for Safety, “Safety results from a process of discovery. Attempting to short-circuit this competitive, evolutionary, trial and error process by wishing the end—safety—without providing the means—decentralized search—is bound to be self-defeating.”

For more discussion on these topics, see Reason Foundation’s reports on near-term AV policy recommendations for the federal and state levels.

The post Recalibrating expectations for the true potential of automated vehicles appeared first on Reason Foundation.

The SEC’s proposed exchange rule change would stifle innovation and technology growth Wed, 23 Feb 2022 16:01:00 +0000 A version of this comment was submitted to the United States Securities and Exchange Commission on Feb. 23, 2022. Reason Foundation opposes government action that would unduly stifle the development and use of innovative technologies. This proposed rule change is … Continued

The post The SEC’s proposed exchange rule change would stifle innovation and technology growth appeared first on Reason Foundation.

A version of this comment was submitted to the United States Securities and Exchange Commission on Feb. 23, 2022.

Reason Foundation opposes government action that would unduly stifle the development and use of innovative technologies. This proposed rule change is an example of such an action:

RIN 3235-AM45
Amendments to Exchange Act Rule 3b-16 Regarding the Definition of “Exchange”;
Regulation ATS for ATSs That Trade U.S. Government Securities, NMS Stocks, and Other
Securities; Regulation SCI for ATSs That Trade U.S. Treasury Securities and Agency Securities.
AGENCY: Securities and Exchange Commission.
ACTION: Proposed rule.
SUMMARY: The Securities and Exchange Commission (“Commission”) is proposing to
amend a rule which defines certain terms used in the statutory definition of “exchange” under
Section 3(a)(1) of the Securities Exchange Act of 1934 (“Exchange Act”) to include systems that
offer the use of non-firm trading interest and communication protocols to bring together buyers
and sellers of securities.

The proposal would expand the definition of a securities exchange so much that it would likely capture many forums, smart contracts, and other communications platforms that are not securities exchanges within the conventional meaning of the term but would nonetheless be exposed to reporting requirements and other regulations. This could inhibit important innovations in a dynamic and highly competitive market without providing commensurate benefits to investors or consumers. 

Previously, in order to be considered a securities exchange, a platform had to provide some way for buyers and sellers to match orders and provide firm trading interest. The Securities and Exchange Commission (SEC) defines such orders as “any firm indication of a willingness to buy or sell a security, as either principal or agent, including any bid or offer quotation, market order, limit order, or other priced order.” 

The proposed rule change would require any communications protocol for buyers and sellers that is used to express “non-firm trading interest” to register as an exchange or a broker-dealer. This could include things like forums where sellers may post information about a security being offered for sale and where others can communicate and bid on that offer, but not execute a trade; in order to execute a trade, the buyer and seller would have to use an actual exchange. Indeed, the new regulation could even impact eBay, which started facilitating transactions of non-fungible tokens (NFTs) last year.

The SEC claims that the securities market has become “more electronic” and that communications systems have become “a preferred method for market participants to discover prices, find a counterparty, execute a trade…” while skirting all the regulations typically surrounding these types of transactions. 

To the extent that this rule change has a legitimate target, it is likely communications relating to “dark pools” of securities, where sellers and buyers can interact anonymously. Dark pools may indeed pose some risks, but they also serve a valuable market function for knowledgeable accredited investors (especially large buyers and sellers), for whom open discussion of a particular security could move its price. As such, this is a complex securities issue that should be given much more time and attention before any change is adopted.

More concerning is that this rule change would capture many technologies, such as forums, decentralized finance, and smart contracts, that are likely not in the intended scope of the rule change. It does exclude “Web Chat Providers” because “such providers are not specifically designed to bring together buyers and sellers of securities or provide procedures or parameters for buyers and sellers to interact.”

The proposal follows up, however, by noting that if a program is designed for buyers and sellers to agree to terms of a trade, it would have to be regulated as an exchange. Yet it is unclear what constitutes “specifically designed” for buyers and sellers.

This rule may have the effect of transferring what the SEC considers “dark pool” conversations from platforms designed to handle those conversations to more general communications technologies. It is unclear why this would improve investor safety or market efficiency.

If technologies such as decentralized finance and smart contracts were intended to be captured by the rule change, they should be explicitly considered. Since they are not, they should be explicitly excluded. If they are not excluded, the rule change will likely capture many smart contract applications that have little to do with securities and much to do with cryptography. It could entail software engineers and hobbyist developers being required to register with the SEC and to track massive amounts of data on their platforms. This would effectively put many of these people out of business, thereby stifling experimentation and innovation in a novel market.

Furthermore, technologies like smart contracts do not always have the kinds of data the SEC’s reporting would require, such as the personal identity of each party. Apart from anything else, smart contracts are not actually contracts; they are rules governing the relationships between two or more computers that self-execute under specified conditions, so it would not be possible to specify a “personal identity” in most cases. Pseudonymity is often a desired characteristic of cryptography and smart contracts (a recent example being privacy-preserving contact-tracing apps).

It is unclear what legality or reporting requirements smart contracts would face under this rule change. There are also many intermediating technologies, such as automated market makers, that are required to make smart contracts work and would technically be captured by this rule despite having nothing to do with securities. This uncertainty could become a strong disincentive for investors, developers, and firms to pursue smart-contract technology.

There is also a First Amendment concern because the rule is so encompassing in the types of communications it captures. It may be unconstitutional for the federal government to require such extensive tracking of free speech about speculative investments in cryptocurrency or other related technologies.

This concern applies even if the regulation were limited to government-securities alternative trading system (ATS) providers such as BrokerTec and DealerWeb; the proposed rule has large implications necessitating a longer review. Publicly held Treasury debt now exceeds $23.6 trillion and is expected to surpass $30 trillion in 2028. Reduced liquidity in this enormous market could result in higher Treasury interest rates, which would have substantial budgetary implications.
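The budgetary stakes can be made concrete with back-of-the-envelope arithmetic. The debt figures below come from the text; the basis-point shock is a hypothetical assumption for illustration, and the calculation deliberately simplifies by applying the rate change to the whole debt stock at once:

```python
debt_today = 23.6e12     # publicly held Treasury debt (from the text)
debt_2028 = 30e12        # projected level (from the text)

def extra_annual_interest(debt_outstanding, rate_increase_bps):
    """Additional annual interest cost if Treasury yields rise by the
    given number of basis points across the entire debt stock.
    Simplification: in reality the effect phases in as debt rolls over."""
    return debt_outstanding * rate_increase_bps / 10_000

# A hypothetical 10 basis-point liquidity premium:
print(f"${extra_annual_interest(debt_today, 10) / 1e9:.1f}B per year today")
print(f"${extra_annual_interest(debt_2028, 10) / 1e9:.1f}B per year at $30T")
```

Even a small liquidity-driven rise in yields thus compounds into tens of billions of dollars of annual interest cost, which is why rule changes touching Treasury market structure deserve careful review.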

The proposal in question is roughly 650 pages long, soliciting comments on 220 different areas. The ramifications of so many changes are likely not well understood by all parties involved and could alter the market in unpredictable or unhealthy ways. 

Laserweeding could eventually eliminate the need for many chemical herbicides Tue, 15 Feb 2022 05:00:00 +0000 The advent of automated, laser-guided methods of weed control for agriculture could mean that farmers will no longer have to use dangerous herbicides or less effective natural weed control options.

The post Laserweeding could eventually eliminate the need for many chemical herbicides appeared first on Reason Foundation.

In the last few decades, new farming technologies like surveillance drones, autonomous tractors and high-tech greenhouses have greatly reduced the costs of growing food and helped increase the food supply. Looking ahead, new advancements in weed control techniques could be the next big agricultural tech breakthrough to help transform farming and improve the environment. 

The advent of automated, laser-guided methods of weed control could mean that farmers will no longer have to rely on potentially dangerous herbicides or on less effective natural weed-control options. In light of public concern over herbicides, some regions of the U.S. have banned certain chemicals used to prevent weed growth. While some farmers gladly accept mandates on herbicides and pesticides because they believe traditional chemicals harm people and/or the environment, most natural weed-control methods come at a high price. Natural herbicides cannot kill the roots of weeds as effectively as synthetic compounds, so many of them require farmers to spray their crops multiple times a season, raising costs, whereas traditional chemical herbicides often need only one application.

Given the decreased effectiveness of natural herbicides, it is unsurprising that many farmers are hesitant to shift away from the traditional chemicals used to protect their crops. Many are looking for alternatives that offer environmental protection alongside cost-reduction benefits, and some entrepreneurs are stepping up to fill this market need.

Carbon Robotics is one of several players in the automated farming technology industry and focuses on building purpose-built products that automate a specific part of the farming process. The company’s new product, the Autonomous LaserWeeder, combines automated driving technology with laser beams to zap a field’s weeds—no chemicals required.

Requiring no physical human oversight, the device can fully weed 15 to 20 acres a day. Today’s average farm team, by contrast, would need at least a week to treat the same acreage. The Autonomous LaserWeeder can also work at any time of day, in any weather, and ultimately decreases the cost of running a farm.

Beyond efficiency, the Autonomous LaserWeeder is much friendlier to the overall health of a farm’s soil and plants. Some of the longest-used chemical herbicides selectively kill weeds, allowing farmers to save time and labor by spraying all crops indiscriminately. Unfortunately, these selective herbicides harm the farm’s soil over time, with some even sterilizing it.

Although some farmers continue to use chemical-based methods and support the soil through other means, many farmers choose more natural herbicides for the sake of their soil. Since these methods are not selective towards agricultural plants and cannot kill all weeds, farms that use natural herbicides often face lower crop yields and higher labor costs overall as they willingly sacrifice some of their profits for the sake of the soil’s health.

Laserweeding technology would allow farmers to benefit from the time-saving advantages of selective herbicides by using high-resolution cameras to differentiate weeds from crops. This technology could also help farmers maintain profits while avoiding harm to the health of the soil.

Heavily reducing chemical herbicide use would likely translate directly into reduced operating costs for most farmers, since, according to surveys, these herbicides account for nearly 30% of total farming expenses on average.

Laserweeding may also have direct benefits for human health. Current estimates suggest that pesticide use over the last half-century has caused major soil depletion, leading researchers to estimate that some vegetables have lost up to 40% of their nutritional value compared to older versions of those crops. Since laserweeding would allow farmers to avoid chemicals entirely, it is likely more beneficial for the soil over the long term than even the safest herbicides. Further, by leaving the soil untouched and unmodified, laserweeding negates the need for herbicidal additives that compromise the land’s chemical integrity, benefiting both the farmer and the environment.

As more farmers learn of and gain access to this technology, many will likely opt to take advantage of its benefits, and herbicides will become less and less popular at a commercial level. However, as with any new technology, there will be an adoption curve: many smaller farmers will be unable to embrace laserweeding until prices decline and the products become more widely available.

In the interim, traditional herbicides are going to remain the most cost-effective way for many farmers to keep their operations running. For this reason, regulators should not force farmers to immediately cease using traditional chemicals. Rather than banning herbicides with harmful chemicals, policymakers should ensure that regulations don’t block or slow the development of farming technologies like laserweeding.

As the American agricultural industry embraces laserweeding and as other technological advancements hit the market, these tools could largely negate the future need for chemical herbicides.
