swan thought i have permission to paraphrase/post: dreamling as a ship posits that the only thing that eludes hob, the embodiment of britishness and all that entails, who has an unquenchable hunger for life and all the time in the world to claim it (an empire the sun can never set on, because it cannot/will not die), is the heart of his stranger - the personification of the collective subconscious. empire can devour the world but it can never colonize the dreams of every person! except that the larger fandom's insistence on him as the universal human implies what if it could. and this is not explored as or considered to be a horror.
#whiteness is always afforded the language of the human#like! what are the implications of one particular nation/state/empire having access to all the power of the collective unconscious#once more i'm reminded of the dreamling fics that used lyrics from hozier's ''eat your young'' as like. romantic.#which is WILD that song is about how capitalism eats EVERYTHING including the future#it is NOT romantic it's HORRIFYING that's the POINT#once again not putting this in the character tag i'm not trying to start fights i'm just#the NARRATIVE FUNCTION of hob matters in that every character's narrative function in sandman matters#bc DREAM is a narrative function who is also a person#so what is it doing in a broader sociocultural or even fandom ecosystem sense#if you posit that hob represents EVERYONE
On Hong Kong, Nationalism, and Social Upheaval: A Mirror for the 21st Century
The protests in Hong Kong have reached a sustained fever pitch that shows no signs of slowing. In a usually politically apathetic and downtrodden place, six months of regular outbursts of strife is nothing to shrug at. While simultaneous global protests have broken out for similar and regionally distinct reasons, one might conclude that cosmopolitan Hong Kong has, in comparison, had relatively few fatal casualties, but that would be sidestepping the historical trajectories these events have taken.
What people don’t seem to grasp is that this is a city that has regularly felt the convulsions of modernity for over a century, since before it even became a colony. As a spread-out collection of villages in the first millennium CE, taxation of salt production, piracy, luxury materials like incense and pearls, and a distant central bureaucracy were already causes of riots. In the 19th century, it would become the port and military foothold guarding British imperial interests, including the intentional import of opium to an increasingly addicted wider Chinese public, cemented by wars and the resulting Treaty of Nanking and Convention of Peking.
Anti-colonial sentiments would take root, but becoming an intersection of East and West afforded new, perhaps unexpected developments in its global and political position. As an emerging 20th-century economic center it had a relatively free press and access to modern academic and social thought. It became host to young revolutionary fervor as well as political, social, and economic refugees; all of these were signs of, and important touchstones in, the hastening demise of Qing rule. As in many previous popular uprisings in China, the central idea was that dynastic fall and the ousting of perceived foreign or corrupt influence (read: non-Han, Western, and in particular Manchurian power) would restore the country to its Heavenly Mandate.
The first Chinese Republic, which Hong Kong helped birth, would be short-lived. After becoming a battleground for Pacific aggression and World War II proper, and witnessing civil war, the retreat of the young Republic to Taiwan, and their aftermath, tensions in the 1950s and 60s remained high and widespread. Colonial repression played out against a backdrop of the greater unresolved ideological conflicts and their local influence, culminating in the 1967 leftist riots. Those riots lasted eight months: homemade bombs and domestic terrorism spread, and people were burned alive, murdered, or assassinated in the streets. It was the last time the Emergency Regulations Ordinance was invoked in Hong Kong. Even when the dust settled, it set off a decades-long wave of migration that changed the face of Chinese communities globally.
This was a premonition of postcolonial anxiety: who would be recognized as the legitimate governing body of a united China, and what would the real political implications be, in a clash between the Republic of China, then under White Terror martial law, and a Mainland China in the grip of the Cultural Revolution’s paranoid iconoclasm? The colonial government woke to a state of unleashed and organized nationalist polarization. A lease of 99 years changed from “good as forever” to serious consideration of an impending repatriation of the leased New Territories and outlying islands within a generation, growing to become the return of the entirety of Hong Kong and the Kowloon peninsula and its citizens under the 1984 Sino-British Joint Declaration.
A similar situation in Macau in 1966 would produce the beginnings of the framework for both cities to be governed under One Country, Two Systems, with vaguely defined guarantees of the preservation of existing rights and freedoms and a “high degree of autonomy” in practically identical versions of Basic Law. What has become obvious is that the People’s Republic has used the norms and mechanisms of internationally recognized treaties and repatriation to achieve its goal of unification under one nation-state, without real sincerity about the stipulations and necessity of two systems. Among other things, it has attempted to stretch historical definitions of its territory, consolidating control over all disputed territories and ultimately targeting Taiwan. Similar experiments with military and authoritarian control continue to play out in Xinjiang and Tibet as “autonomous regions”.
The vague terms, hands-off oversight on the part of foreign co-signers, and legal limitations without clear plans after their 50-year expiry have meant China has been willing to renege on or manipulate the exercise of the Basic Law and its protections after the fact, both through its own central bureaucracy and through direction of the Chief Executive. Over the course of the current protests, it declared the 1984 Joint Declaration a “historical document” rather than a binding agreement with the co-signers and the citizens, effectively claiming sole discretion over governing its “internal affairs”.
***
This history of demonstrations, riots, and outbreaks of violence is preserved in the pop and social culture of protest and aggressive opposition tactics, as well as in the more recent reputation for political reticence, fear, and defeatism. Political and economic maneuvering, along with the decline of European empires after World War II, gave way to local material changes in allegiances and diplomatic relationships.
Establishment conservatives who once aligned themselves with the colonial powers, the business elite, and/or Kuomintang Taiwan now find a shared nationalism and anti-progressive stance with the formerly young communists who hold sway as the entrenched pro-Beijing camp. The pan-democrats, as their political opposition, are an uneasy confederation of progressives, reformists, and radicals aligned with emergent nativist, populist, and separatist elements, as well as reactionaries, in their obstruction of centralized Chinese authoritarianism and their demands for full suffrage, democracy, and self-determination.
In 2003, protests against Article 23 brought out record numbers of people in opposition to restrictions on the right to free assembly and political expression imposed through anti-subversion and national security laws and language. Failed Beijing-controlled electoral reform directly led to Occupy Central and the Umbrella Movement in 2014, as well as the 2016 Mong Kok Fishball Revolution, effectively tanking the political establishment’s hold on popular support.
The rising political figures in the opposition and grassroots activists involved in those protests continue to experience political and legal persecution, bearing the brunt of representing the growing social and political disquiet. The Causeway Bay Books abductions and forced confessions, among other high-profile extrajudicial kidnappings, only confirmed fears of the opaque and encroaching bureaucratic and judicial systems of the PRC. Exercising freedom of speech and expression and advocating for self-determination have resulted in inconsistent legal interpretations and retributive disqualifications of candidates from standing for election, or from being administered the oath of office after being elected.
References to those previous conflicts keep popping up in 2019 actions against people, businesses, media, organizations, and unions historically associated with the People’s Republic. The 70th anniversary of the founding of the PRC and the 30th annual memorial of the Tiananmen Square massacre are just two reminders in the tortured historical litany of modern Chinese politics. Slogans like “if we burn, you burn with us” and “liberate (more accurately, restore) Hong Kong, the revolution of our times” reflect an awareness of the tactics and rhetoric of pre- and post-revolutionary China, reopening tensions of cultural North and South, Nationalist and Communist, foreign and local/indigenous. They also reveal a level of frustrated nostalgia and suffering that has not been addressed.
Idiosyncrasies like leaving pineapples become clearer when you understand they are an idiom for the 1967 bombings; pasting and repurposing images of political figures to step on, humiliate, or deface hints at the local folk magic practice of “villain hitting” (打小人) and the targets’ presumed malevolence; and the play with ambiguous meanings, homophones, and coded language that makes Hong Kong Cantonese comic and irreverent is pulled directly from a deep, even ancient subversive streak.
The earlier rallying cry of the protests, “反送中, 抗惡法”, can be read as “oppose extradition to China, fight the evil law”, while sounding similar to “oppose sending family (to the grave)”, namely the death knell of Hong Kong at the hands of China. The intent is obvious, as the phrase used for a familial tragedy also invokes the homophone taboo of gifting a clock or bell. “Hong Kongers, add oil (keep going)” has morphed from comfort and encouragement to solidarity, to “resist” and “avenge”.
***
Ghost in the Shell re-imagined the claustrophobic late-20th-century architecture and social landscape of Hong Kong as a near-future dystopia, at a time when the traumas of war, globalization, overcrowding, ideological conflict, and technological acceleration were already ominously present and barely concealed. In a striking way, Hong Kong is still that glimpse into the future: its traditional economic and cultural influence is waning, and cost and quality of life are at extremes. There is a high barrier of entry and a lack of social mobility for the younger generation, space is at a dizzying premium, and the city is politically under-defined, stunted by a lack of democratic and civic access, all while increasingly dependent on an economically hungry and politically abhorrent behemoth that it helped create.
In the last two decades they’ve suffered through repeated injury to their tourism industry, health and financial crises, intense social and economic pressure, and most importantly a direct existential threat to their security and freedoms from China and CCP influence in the form of electoral, policy, and extrajudicial control. They are 7 million people with defined rights of democracy and autonomy facing down 1.4 billion controlled by a single-party authoritarian state, directly supported by one of the largest military forces on the planet. The rush to close a split of over a century through homogenization and socio-economic extortion is absolutely a collision that we are watching happen in real time.
Hong Kong is a place of contradiction, neither truly old nor new, and my interest in the ongoing protests extends beyond it being a place I am familiar with, or my sentimental attachments to it. What we are seeing is late-stage capitalism in decline, exaggerated for us all to see: democratic rights and freedoms are lip service, oligarchs and convenient political operators are the only ones with any say, and rampant authoritarianism goes unchecked while claiming to be the injured party. China’s paranoia is not historically unfounded, but it is unhinged when it is the state’s coercive use of power and deep insecurity that drive these problems in the first place.
***
I’m well aware that there are actions over the course of these protests and demonstrations that have crossed the line. Violence and even vigilantism that was once shocking but distant now grows closer. The bullets and flammables fly, and the grievous injuries and accounts of abuse pile up in a broader pattern of increasing intent to cause direct and permanent harm. The dehumanization, the deaths under suspicious or unexplained circumstances, and the lack of accountability or responsibility on the part of the police and an inept government add insult to injury, further evidence for the primary criticism that they are neither valid nor effective as institutions.
Carrie Lam and her bureaucracy’s inability to address any of the remaining demands or to generate meaningful solutions can only be described as arrogance. There is no end to the current impasse as long as they continue to rely on China’s tepid endorsement. Her withering approval rating, the lack of real mechanisms to consider the needs of the populace, and the use of the police as cover for their spoken and unspoken approval of brutal tactics are signs that they only wish to return to a superficial and ultimately temporary normalcy.
These aren’t blanket “anti-government” protests, as many conservative outlets insist on phrasing it; the protesters are allied in their rejection of an unelected and unreceptive one. Rather than understand and resolve the root causes, Lam has made a hard-line declaration that the pressure of violence will not win further demands. In doing so she has missed yet another opportunity to reflect and reconcile, and will have to accept responsibility for prolonging this crisis further.
#history#Carrie Lam#chinese communist party#hong kong#china#protest#反送中#香港#violence#police#police brutality#chinese civil war#kuomintang#taiwan#colonialism#sino british joint declaration#treaty of nanking#convention of peking#19th century#20th century#21st century#writing#politics#逃犯條例#nationalism#identity#antielab#antiELAB
On Voting: Be Part of The Solution Not Part of The Problem
In a recent article in “The Week,” titled “Confessions of an Ex-Voter,” Matthew Walther presents a perfect example of privileged arrogance and civic malfeasance. This isn't surprising because he begins the article laying out his childish, uneducated views of politics for the first twenty-three years of his life. His “American Idol” approach to politics and voting is an underlying basis for his argument about why he doesn't vote. Let's go through his reasons. (His words in bold.)
There are any number of reasons why I have not voted since. One is simply that I cannot manage to fulfill the minimum requirement of keeping my address up to date.
Translation: I'm so fucking lazy, I can't even be bothered to fill out a simple form and send it in. Right after telling us he doesn't know jack about politics, he informs us he is lazy as well. It is going to be really difficult to make an argument that is taken seriously when your setup is, “I'm ignorant about the topic and lazy.”
Another reason I have found for not voting is that in most cases it appears that my ballot will not make a difference.
This is a classic fallacy: not seeing the electoral forest for the trees. If a particular candidate wins an election by a wide margin, no single vote is responsible for the win or loss. The implication of this view is that the only votes that “count” are the ones that decide an election by one vote. The thing is, even in elections won by a single vote, every person who voted for the winning candidate can legitimately be viewed as casting the “deciding vote.” The example the author uses is the presidential election results in Florida in 2000. If George W. Bush had won Florida by a single vote, then every single one of the 2,912,790 votes cast for him would have been equally important because they all contributed to him winning the election.
The decline of regional newspapers has made local affairs outside major metropolitan areas a matter of anagogic frustration to voters, who have only the faintest idea how and by whom most decisions are made in their states and cities. One simply accepts things as they are.
Even if this statement is true, it is nothing more than another example of the author's laziness. Yes, local news in many places is no longer disseminated by local newspapers, but there are many good online sources of information for anyone with access to a computer or a smartphone, which I'm pretty sure covers the author. If you “accept things as they are,” the problem with voting isn't the process or the candidates, it is you.
But my principal reason for declining to take part in elections is moral. It involves, I suppose, a private objection to democracy itself.
Now that the blatantly nonsensical arguments have been laid out, the author finally gets down to the real reason he doesn't vote: he has a fucked up view of morality and democracy.
For most of history men and women enjoyed the luxury of knowing that the sovereign's rule was a brute fact about which nothing could be done. They went about the ordinary business of life — laboring, raising children, worshipping their creator — untroubled by futile expectations of change. Some of us continue to aspire to this happy ideal.
Go ahead, read the above paragraph a few times. Let the fucked up, idealistic view of history wash over you as you try to wrap your brain around the fact that this statement is at the crux of an argument against voting. Even if you ignore the fact that the time and situation the author longs for was the main catalyst behind the Renaissance, the formation of the U.S., and most of the progress made over the past 300+ years, this is still a severely fucked up premise. The lives of the vast majority of people who lived under sovereign rule were deplorable. The problems and issues of income inequality now are nothing compared to the era the author pines for. The problem, besides not knowing a damn thing about history, is with the author's last word in the paragraph: “ideal.” His entire view of history and voting is idealistic in a dangerous, mythologized, untethered-from-reality way.
Popular elections are a recent phenomenon in human affairs. I do not expect the illusion that there is something nobler about choosing leaders than inheriting them to hold sway over our imaginations forever. The neoliberal economic consensus that has united both of our major political parties, and indeed most politicians in the industrialized world, is a more powerful force than democracy.
There is a lot to unpack from this word salad of nonsense. First, “popular elections are a recent phenomenon...” Yes. So too are human rights, safe drinking water, indoor plumbing, television, the internet... The notion that something being relatively new makes it bad or a passing fancy is fallacious.
“The neoliberal economic consensus that has united both of our major political parties...” Of course someone who has proudly expressed their laziness and ignorance tosses out “neoliberal” as a cudgel. Then, to make matters worse, he goes full “both sides are to blame.” One political party has been obsessively devoted to supply-side economics and the other has not. One political party believes in unfettered capitalism and the other believes in capitalism with restrictions. One political party believes in large tax cuts for the wealthy and corporations and the other in taxing these to fund social programs. Just because someone uses “neoliberal” doesn't mean they understand it in the slightest.
In a bland way I hope to see Republicans control various state and national offices. This arises from a single issue: the legality of abortion.
The author has already professed his Libertarian bent and is a writer at Libertarian Dudebro Central: The Federalist. Yet, when it comes to standing up for and defending freedoms and individual choices, he is 100% against women being eligible for either. So, besides being ignorant and lazy, the author is a misogynist and a hypocrite.
The capriciousness of his (Trump) decisions, the hideousness of his conduct, and the visible descent of his mind and body into a ribald senescence are easier to bear if one sees him as a decadent potentate late in the decline of an empire... I have neither the power nor the will to alter the reality of Trump's presidency.
Earlier, the author claims the number one reason he doesn't vote is based on some sense of morality. Now, after laying out the immoral actions of Trump, he suddenly doesn't give a fuck about his morals. At least the author is consistent in his laziness. If you think you can't change politics in a representative democracy, you don't understand the meaning of “representative” or “democracy.” If you don't have the will to alter the reality of what the author labels “hideous conduct... ribald senescence... decadence...,” then the problem is you.
I like to imagine that my disinterest allows me to see things more clearly than partisans, but even if this is not so it certainly makes me happier.
Since the author starts the article with seriously faulty premises, he might as well end with one. His argument at the conclusion is basically, “Since I don't know a damn thing about history, politics, democracy... I see things related to these more clearly.” “Since I didn't go to med school or study surgical techniques, but I write about them, I see things in the operating room more clearly.” You don't get to state you are lazy, uninterested in the process and outcomes, then claim you see the things you are lazy, uninterested in, and really don't give a fuck about, “more clearly.”
This kind of self-congratulatory attitude about being lazy and ignorant about politics is bad enough. When it is used to discourage others from participating in the democratic process, it is dangerous because it feeds the very problem that makes America less democratic: voter apathy.
Voting is an individual right but a social obligation. To not vote, to argue that voting isn't worth it, that not voting is the moral thing to do... is the very definition of social negligence and unethical behavior. Of course, you should vote for the issues that are important to you and for the candidates you think best reflect these values. However, if there aren't candidates who perfectly match what you want, it is still ethically necessary to vote for the person you think will do the most good because, unlike what the author claims, a single vote can make the difference between millions having health care, millions having better wages, millions having clean drinking water... If you think not voting for moral reasons outweighs these kinds of benefits, your moral compass is severely fucked up.
It has been suggested to me that this article was written almost as satire, to be provocative. If so, it was executed very poorly. And because it was done so badly, even as satire it comes across as serious, and in so doing adds to what is already a dangerous problem: voter apathy. There are far too many people who honestly believe the things written in this article. There are a lot more coming of voting age who already believe the propaganda that both major political parties don't care about them. To write or say anything that feeds this apathy, this attitude, is unconscionable.
The LawBytes Podcast, Episode 22: Navigating Intermediary Liability for the Internet – A Conversation with Daphne Keller
The question of what responsibility should lie with Internet platforms for the user-posted content they host has been the subject of debate around the world, as politicians, regulators, and the broader public navigate policy choices to combat harmful speech, choices that have implications for freedom of expression, online harms, competition, and innovation. To help sort through the policy options, Daphne Keller, the Director of Intermediary Liability at Stanford’s Center for Internet and Society, joins the podcast this week. She recently posted an excellent article on the Balkinization blog that provided a helpful guide to intermediary liability lawmaking and agreed to chat about how policy makers can adjust the dials on new rules to best reflect national goals.
The podcast can be downloaded here and is embedded below. The transcript is posted at the bottom of this post or can be accessed here. Subscribe to the podcast via Apple Podcast, Google Play, Spotify or the RSS feed. Updates on the podcast on Twitter at @Lawbytespod.
Episode Notes:
Keller, Build Your Own Intermediary Liability Law: A Kit for Policy Wonks of All Ages
Credits: Standing Committee on Industry, Science and Technology, May 28, 2019
Transcript:
LawBytes Podcast – Episode 22 was automatically transcribed by Sonix; the transcript may contain errors.
Michael Geist: This is Law Bytes, a podcast with Michael Geist.
Dan Albas: Minister, number nine of your guidelines: free from hate and vile extremism. The prime minister has obviously been talking a lot about protecting Canadians from hate and violent extremism, as well as disinformation. Now, I believe no one here defends hate speech, and all Canadians deserve to feel safe in their communities and online. My question is, how will you enforce this measure? How will you monitor these platforms while also protecting free speech?
Navdeep Bains: So free speech is absolutely essential. It’s part of our charter rights and freedoms. This is why I became a Liberal. And this is really core to our democracy and what it means to be Canadian. But at the same time, there are clear limitations to that when it comes to hate, for example. And we see newspapers and broadcasters that hold themselves to account when it comes to not spewing that kind of hate on their platforms. So clearly, these digital platforms that have emerged also have a responsibility. We are all very aware of the 51 individuals that were killed in Christchurch, New Zealand. And that really prompted this call to action, where the prime minister was in Paris to say platforms need to step up. If they had the technology, if they had the ability to bring people together, to connect people, and to invest in A.I. and all these different technologies, then they need to deploy those technologies to prevent those platforms from being used as a means to disseminate extremism, terrorism or hate. And so what we’re trying to do as a government is really apply pressure to these platforms to hold them to account. And those platforms recognize they need to step up as well. And that’s one key mechanism of how we want to deal with this.
Michael Geist: The debates over intermediary liability, which focus on what responsibility should lie with Internet platforms and service providers for the content they host that’s posted by their users, have been taking place around the world, in parliaments, op-ed pages, and the broader public debate. Much like the exchange you just heard between Canadian Conservative MP Dan Albas and Innovation, Science and Economic Development Minister Navdeep Bains from earlier this spring, there are no easy answers, with policy choices that have implications for freedom of expression, online harms, competition and innovation. To help sort through the policy options and their implications, I’m joined on the podcast this week by Daphne Keller, the director of Intermediary Liability at Stanford’s Center for Internet and Society. Daphne, who served as associate general counsel for Google until 2015, worked on groundbreaking intermediary liability litigation and legislation around the world while at the company, and her work at Stanford focuses on platform regulation and Internet users’ rights. She recently posted an excellent article on the Balkinization blog that provided a helpful guide to intermediary liability lawmaking and agreed to chat about how policymakers can adjust the dials on new rules to best reflect national goals.
Michael Geist: Daphne, thanks so much for joining me on the podcast.
Daphne Keller: Thank you. It’s good to be here.
Michael Geist: Great. So as you know, there’s been a lot of momentum lately towards regulating online speech and establishing more liability for the large Internet platforms. That’s an issue that we’ve been grappling with, really, I think, since the popularization of the Internet back in the 1990s. But today there seem to be more and more countries expressing concern about online harms and looking especially to the large platforms, the Googles and Facebooks and others, to do more, with real legal and regulatory threats if they don’t. So before we get into some of the challenges inherent in these kinds of “do something” demands, I thought we could set the groundwork a little bit from a couple of perspectives: both what the law says now and what the platforms have been doing. Why don’t we start with the laws, recognizing there are differences, of course, between lots of different countries. Where have we been for the last number of years, even going back a couple of decades, with respect to some of these liability questions?
Daphne Keller: Well, a lot of countries never enacted Internet-specific content liability laws. So depending where you are in the world, it might be that these things get resolved just based on existing defamation law or existing copyright law. But in the U.S. and the European Union, the law has been relatively stable going back two decades-ish. In the US, we’ve had two very major statutes that occupy almost the whole field. We have the DMCA, the Digital Millennium Copyright Act, for copyright, and that sets out a sort of detailed takedown process with a lot of prescriptive steps. And then the other major U.S. law is Communications Decency Act 230, generally known as CDA 230, which is a very broad immunity for most other kinds of claims, for anything that’s not intellectual property or a federal crime. So for things like defamation or invasion of privacy claims, the platforms are just immunized. In Europe – is it useful if I go into some detail about Europe or is that wandering off topic for you?
Michael Geist: I think it’s really useful, because from a Canadian perspective in particular, we’ve on the one hand now got the USMCA, which seems to put some of the U.S. rules in place in Canada, at least at a high level, via trade. But at the same time, I don’t think there’s any question that what’s been taking place in Europe is influencing a lot of the thinking among some Canadian politicians.
Daphne Keller: Yeah, okay. So the main law on platform liability at the EU level is the e-Commerce Directive, which was passed in 2000 and is implemented slightly differently in the different member state laws. But the basic concept is you get limited immunity if you are a certain kind of intermediary. So you have to be a hosting, caching, or transit provider. So it’s a little bit of a funny immunity in that it’s not clear if it covers search engines or some other things you might expect to be covered by intermediary liability protections. But if you’re eligible for those safe harbors, the rule is basically you have to take down unlawful content if you know about it. However the member states and the courts implement the law, they can’t compel you to go out and proactively monitor. It’s just a reactive, knowledge-based obligation. And that, I think, has had some real shortcomings, just because it lacks a lot of the procedural protections that you see in something like the DMCA, where, for example, the person who’s being accused of copyright infringement is supposed to be able to get a notice that it’s happened and be able to challenge it and so forth. There isn’t that kind of detail in most European laws. And so platforms have an even greater incentive to just take an accuser’s word for it and go ahead and take content down, even if it’s not at all clear that it’s illegal, because it’s much safer to take things down and avoid risk for yourself. And what empirical data we have shows this happening: lots and lots of unfounded allegations and lots and lots of erroneous takedowns.
Michael Geist: Right. I mean, I think the situation can sometimes be similar in Canada, where, without some of the clear-cut procedural safeguards, faced with the question of what might or might not be unlawful content, large platforms may err on the side of taking down just because it’s simpler to do that. So we’ve got large platforms with some amount of protection, safe harbors, in both of the major jurisdictions, with stronger procedural protections such that, I suppose, the bias is more to leave content up in the United States unless the process is met, whereas in Europe that may reverse. How do the companies handle some of those kinds of differences? Is it as simple as in the US they’re more likely to err towards leaving content online, and in Europe and perhaps similar countries without those procedural protections they’re more likely to take things down?
Daphne Keller: Certainly. I mean, if we’re talking about the big platforms like the Facebooks and Googles and Twitters of the world, they all have nationally specific versions that are targeted to users in a particular jurisdiction and often are optimized for them in ways that are about commercial success. There will be a Google Doodle that’s relevant for a local holiday that’s shown just in that country, for example. But also, by having different versions of the service for different countries, you can sort of sandbox legal compliance and say, OK, we’ve established that this content is illegal in France, so we’re going to take it down from the French version of the service, but we’re not going to apply French law globally.
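The "sandboxed" compliance Keller describes can be pictured as a per-jurisdiction visibility check. A rough illustration in Python (the data model is invented, not any real platform's implementation):

```python
# Invented data model: each takedown records the jurisdictions whose law
# required removal, so court orders stay geographically scoped.
takedowns = {
    "video-123": {"FR"},        # ruled illegal in France only
    "post-456": {"DE", "FR"},   # removed under German and French law
}

def is_visible(content_id: str, viewer_country: str) -> bool:
    """Content stays up everywhere except where a takedown applies."""
    return viewer_country not in takedowns.get(content_id, set())

print(is_visible("video-123", "FR"))  # False: hidden on the French version
print(is_visible("video-123", "CA"))  # True: still visible elsewhere
```

The Equustek question raised next in the conversation is precisely whether a single country's order can reach past this kind of scoping and apply globally.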
Michael Geist: Right. So that, of course, gets us into the question of things like the Equustek case that we had in Canada, where you get a single country like Canada trying to make those decisions not just for its own citizens but effectively for others by court order. But we’ll park that for the moment, because this stuff gets so complicated so quickly, and stick primarily to the pressure for more regulation: the sense that somehow the rules, as you’ve just articulated them, are, at least in the minds of some politicians (and we certainly see it as part of the discourse), not good enough. That erring on one side or the other has still left us, in a sense, with a certain amount of harm online, and I think there’s a greater concern and appreciation for that. So there is unquestionably mounting pressure to do more from a regulatory perspective as a way of requiring, in effect, these large platforms to do more. Now, you’ve been really prolific on the issue and written all different kinds of things on it, but it was a piece on the blog Balkinization that really crystallized it for me, because it highlights the challenges of intermediary liability laws. I guess as a starting point, what are we often trying to balance when it comes to these laws?
Daphne Keller: So there are generally three goals that legislatures are trying to balance. One is to prevent harm: to take down content if it’s defaming someone or if it’s movie piracy or, you know, causing harm. Another is to protect free expression. And obviously there’s this tradeoff where, if the platforms are very afraid of liability, they’re likely to err on the side of taking things down, and so controversial speech gets suppressed and so forth. And then the third goal is protecting technical innovation and the economic growth that can come with it. So, you know, if you are a small startup, it’s really important to have immunities and to know that you’re not going to be required, for example, to build a 100-million-dollar content filter. Because right now, at least in the US and in Europe, if you start a new platform and you want to compete with Facebook or compete with Twitter, you can know with relative certainty what kind of legal exposure you’re setting yourself up for and what it is you’re going to have to take down and potentially pay lawyers for. But if that becomes less certain, then it’s harder and harder for small companies to enter the market and for people to experiment with new technologies. So just to recap, the three goals being balanced are harm prevention, free expression protection, and innovation.
Michael Geist: Sure. And I guess before we get into how you move some of those dials, with those three goals I’m going to assume that many countries will look at each of those three policy objectives somewhat differently. Some may have constitutional norms that provide very strong protections on the freedom of expression side and are more willing to give, let’s say, on the innovation side.
Daphne Keller: Yeah, absolutely. And that kind of manifests in two ways. One is that some countries prohibit more speech than others. So, you know, they strike a balance that, for example, protects privacy more and sacrifices free expression in exchange, or vice versa. But it also manifests in how countries set up their platform liability rules. Whatever it is that you are prohibiting, your platform liability rules are going to lead platforms to err on one side or the other. And so if you are starting from less speech-protective goals, then maybe you’re more tolerant of a rule that’s going to lead platforms to take down a little bit too much speech, or a lot too much speech.
Michael Geist: Right. And we’ve certainly seen, at least in some places, with or without some of the constitutional norms around freedom of expression, that there has lately been a great deal of emphasis on the harm side.
Daphne Keller: Absolutely.
Michael Geist: And if that’s the priority, then, if we’re trying to deal with each of these three things, there may be real implications, I think, is what you’re getting at: ultimately either for fostering innovative competitors in this space or for the safeguards around freedom of expression.
Daphne Keller: Yeah. And I think right now we’re seeing a big tendency for policies from Europe to get exported to the rest of the world, either via other countries adopting similar laws or via platforms taking European law and just applying it globally. But that’s kind of problematic, not just because of the conflict with United States law, which is what you hear about the most, but because of the conflict with a lot of other countries’ laws. If you compare human rights law, the European Convention and the EU Charter prioritize some things, like privacy and personality rights protection, relatively highly compared to the Inter-American Convention, which is explicitly set up to prioritize free expression more highly.
Michael Geist: Right. And we ran into some of those questions in Canada last year around website-blocking-related issues, where again it was freedom of expression versus copyright versus privacy versus even net neutrality type issues, and you’ve got to grapple with each of those kinds of competing objectives. Why don’t we stay for a moment with the implications for freedom of expression, because, at least as part of the discourse lately, there’s been a tendency amongst some to sideline that, to sort of say, well, listen, of course it may have some implications, but we’re now focused more on the harm. As we start getting into intermediary liability type rules, what are some of the potential negative effects for freedom of expression?
Daphne Keller: Well, I mean, already we see things like governments abusing copyright takedown systems to suppress criticism. The Government of Ecuador got caught using DMCA requests to try to take down police brutality videos and critical journalism. So, you know, even with the systems that we have now, there’s a lot of opportunity for abuse. Sometimes it silences really important political speech. Other times the abusive takedown requests are, like, one commercial competitor trying to silence another, which is also a problem. But beyond abuse, there’s just a lot of room for important speech to disappear.
Daphne Keller: Perhaps the most politically consequential shift that I see right now is the tremendous emphasis in Europe and in some other regions on terrorist content, because as platforms err on the side of taking down too much to be safe, the thing that’s adjacent to so-called terrorist content is likely to be political speech about tough issues, you know, about American military policy in the Middle East or about immigration policy in Europe. And so that erring on the side of taking down too much, when what you’re looking for is potentially violent extremist-supporting speech, threatens some really important stuff.
Michael Geist: That’s interesting. I mean, in Canada we’ve largely avoided the takedown rules in copyright that you referenced. Successive governments have, in a sense, I think, looked at the experience elsewhere, seen some of these kinds of implications, such as the Ecuadorian example that you just provided, and largely avoided adopting that, though many of the platforms that are of course very popular in Canada still effectively use takedown systems. So Canadians find themselves subject to them at a certain level, even if they aren’t found within our laws. It’s striking to talk about some of these decisions and the removal of content. What role, if any, do the courts play in all of this, or does it just fall to the platforms, and they are the ones making these calls?
Daphne Keller: Well, it depends where you are. There are some very interesting rulings internationally saying the courts have to be involved in some countries. So in Argentina, the Supreme Court ruled that for most kinds of content, a platform doesn’t have any legal obligation to take it down until a court has looked at it, given it full and fair due process, and adjudicated that it’s illegal, because they didn’t want to put platforms in the position of being the arbiters of speech rules. There is a similar ruling from the Supreme Court of India saying you need an adequate government authority to decide what’s illegal, and you shouldn’t put it in the hands of platforms. That, of course, isn’t how it has worked in the US with copyright, or in Europe with the knowledge-based takedown systems that they have. And that has created a sort of asymmetry in the access to remedies for the people who are affected by takedowns. If you’re somebody who is a victim of defamation, or a rights holder whose copyright is being infringed, and a platform doesn’t do what you want, you can sue them: you can take it to court and get your rights enforced. But if you’re someone who’s an online speaker and you have been wrongly silenced by a false accusation or an error, in most countries you don’t have standing to go to court and challenge that. So there isn’t a way to correct the errors of over-removal. There’s only a way, using courts, to correct the errors of under-removal.
Michael Geist: For those that are accustomed to seeing due process as a core part of protecting freedom of expression, the notion that we would ultimately leave these decisions to large platforms can be pretty frightening. And it was, again, the site-blocking issue in Canada: the proposal that was put forward was one that did not involve direct court oversight, which was a part of where the real concerns lay, when you start vesting so much responsibility in these platforms to make these kinds of decisions. There are those that say that’s appropriate, in part because they are increasingly likening the platforms to publishers and saying, this sure looks a lot like a conventional publisher, shouldn’t they have the same kind of responsibility? What are some of the implications, as you see it, of treating large Internet platforms as akin to a conventional publisher?
Daphne Keller: Well, I think it would be impossible for them to function if they were treated like publishers. Publishers do pre-publication review of the editorials that they put up or the, you know, TV shows that they air. And if there is something controversial in there, they have a lawyer look at it and decide if it’s legal. You can’t layer a process like that on top of Twitter or Facebook. What are they supposed to do, hold all of our tweets while their legal team evaluates them? There just isn’t a model where truly publisher-like legal responsibility can be put on platforms while we still get to post things instantaneously and communicate and have a soapbox or talk to our friends. You know, all of the uses that we value that come from having an instantaneous communication platform on the Internet depend on those intermediaries not having to vet everything we say.
Michael Geist: That does highlight the particular challenge that I know you’ve seen, and that I saw at one of the content moderation at scale conferences, when you start getting into just the sheer amount of content that exists and what it ultimately means to put responsibility on a platform to vet all of that, or even just to try to deal with all of it. It’s something that we haven’t really seen before in publishing or content history: everybody having the opportunity, in a sense, to speak, and using these platforms to do it. What are some of the implications if you move towards almost a one-size-fits-all type approach, saying that we are going to have this requirement, whether it’s vetting beforehand or even taking action after, given the scope and size of what’s taking place? If we treat Facebook as akin to, you know, other platforms or large sites that have a lot of user-generated content out there, the Wikipedias or Reddits of the world?
Daphne Keller: Yeah. Well, I mean, I do want to be clear that I’m not saying our only choices are to give them complete immunity or, you know, lose the Internet. The point of the Balkinization piece is that there are a lot of knobs and dials you can turn in the laws. You could have an accelerated TRO process to get something taken down, or you could have some kinds of content where we do expect platforms to know it when they see it and take it down, and others where you wait for a court, which is what the law de facto does anyway right now. You know, platforms even in the US have to take down child sex abuse material immediately if they see it. They’re not supposed to wait for a court to assess it. But the rule is very different for defamation, you know, where it’s often very difficult to know the correct legal assessment. So, just with that background, I don’t think we need an all-or-nothing system, and I’m not saying lawmakers in the 90s got things perfect and we should never re-ask any questions. But whatever the obligations are that we put on platforms, the kinds of things we might reasonably ask Facebook or YouTube to do are very different from the kinds of things we might reasonably ask a small local blog, or a two-person company developing a chat app, or, you know, smaller competitors to do.
Daphne Keller: And I think lawmakers are often falling into a trap where they say we need to regulate platforms, and what they have in mind is Facebook and YouTube. They know that YouTube can do things like spend a hundred million dollars developing a copyright filter, and they know that Facebook can do things like hire 20 or 30 thousand people, at this point, to do content moderation. You know, they can really move mountains and put tremendous resources into this. And so they craft laws accordingly. They say, well, platforms should have to filter, platforms should have to have very rapid human review when they’re notified that something is unlawful. And that’s tolerable for Google and Facebook. I think those laws are very likely to change the major platforms so that they take down a lot more lawful speech, but they’re not going to go out of business. But if you are Medium or Automattic, or even Pinterest or Reddit: Reddit has 500 employees. They don’t have a moderation team of tens of thousands of people. And so the kinds of rules that might plausibly be imposed on very large platforms just won’t work for small platforms.
Michael Geist: No, I think that’s a good point. And I think we’re certainly past the prospect of saying there are no rules out there. As you’ve highlighted, there always have been some, and in some places there’s been an expectation of even more aggressive takedown and moderation. But it’s clear we’re moving more and more in that direction; the question, I think, as you’ve put it, is how you adjust the dials. One of the things that was striking to me is how much emphasis there has been on platform responsibility for harmful speech, let’s say, as opposed to the focus on individuals themselves. So, you know, in the aftermath of Christchurch, for example, a terrible event, almost all the focus seemed to be on what Facebook did or didn’t do, or YouTube did or didn’t do, as opposed to the individual who did this and other people around him who might have been involved. Do you have thoughts on what we might do to focus not just on platform responsibility here, but on individual responsibility as well, where there are people purveying hate or engaging in things that are illegal under various laws?
Daphne Keller: Yeah, I think the focus on platforms is on the one hand understandable, because they represent a choke point. You know, they can shut down a lot of individuals in situations where it’s hard for plaintiffs and law enforcement to go find those individuals. But they’re a pretty bad choke point, because they won’t stand up for the individual speakers’ interests outside of, you know, relatively special circumstances. And on the other hand, focusing on the platforms really risks failing to address the underlying issues. We’ve seen this in the EU terrorism context. There’s been tremendous energy put into making platforms take down videos that are recruitment videos or terrorist violence videos. And then when civil society organizations in Europe have asked the police, well, how many of those uploaders did you go try to find, how many of the video creators did you prosecute, how many actual investigations came out of this, there don’t seem to be a lot of efforts being put in that direction. And so, you know, it’s not that all enforcement should move off of platforms and onto individuals. But it certainly is the case that focusing so excessively on platforms is missing out on really important pieces of solving the problem.
Daphne Keller: The other complication here, for many cases, as you know well from the copyright context and from other contexts where you work, is that online speakers who are sharing illegal content are often anonymous. And so if we say the law should go after the speakers more, you know, that starts inviting lawmakers to strip away at anonymity rights, or to propose that platforms should have to retain the real ID of people who post content. So, you know, there are huge policy tradeoffs in any direction there.
Michael Geist: Yeah. I think it’s really striking just how, each time you peel back a little bit on some of these policy choices, it’s not the slam dunk that you sometimes hear about as part of the discussion: you just regulate that; they broke it, they’ve got to fix it. Because there are all these kinds of choices. I assume you don’t have much of a crystal ball, and it’s tough to know where we’re necessarily going. So rather than closing by asking what this landscape is going to look like in 12 or 24 months, I guess I’m curious: are you optimistic that, as there is action, because it certainly feels like there’s a lot of momentum there, countries and politicians are going to get the complexity that you’re highlighting here? Or are we at a moment in time where there’s the so-called tech-lash and strong momentum towards "you’ve got to do something", such that some of those implications will simply get lost in the rush to do something?
Daphne Keller: I’m not optimistic in the US, and this is part of why I put up that Balkinization piece, because I see people proposing laws that are just ignorant of the known doctrines that can be deployed in intermediary liability. You know, they say, oh, well, let’s just tell platforms to be reasonable, without looking at what a platform is likely to do if it has a vague standard: it’s likely to just take everything down to avoid risk. So I think we are at risk in the US of getting laws that are so badly drafted that they might just be unconstitutional. But going through the process of passing a law and then litigating to figure out if it’s unconstitutional is not a really good way to arrive at standards. In Europe, in a way, I’m more optimistic. It’s not that I like most of the legal proposals that have been coming out of Europe now, but that’s mostly because they represent a sort of policy tradeoff that I wouldn’t make, between free expression protection and harm prevention, for example. But European civil society has been very active on intermediary liability issues for quite a while. And so you tend to see in the legal proposals coming out of the EU at least process protections, you know, at least ideas like: if you’re going to use a technical filter to identify supposedly unlawful content, you should have some humans double-check to make sure that filter didn’t make a mistake. Or you see legal proposals saying things like you should notify users and give them an opportunity to challenge. Or, the latest draft of the terrorist content regulation, which is very close to becoming law there, has some really impressive transparency provisions for governments, saying not just that platforms have to be transparent about what they’re taking down and why, but also that if governments are requesting that content be taken down, they need to tell the public what it is that they’re doing.
So we are slowly moving toward knowing what the dials and knobs are and what the things are that we can do to help create more protections. And in a way, slowing things down seems like our best chance of building up more education in the policymaking community so that we get better laws.
Michael Geist: Well, I think you’ve done a lot to help educate, because the stuff that you’ve been working on, the large databases that highlight the kinds of cases that are out there and allow for a more comparative look, as well as some of the analysis, is in many ways where people need to start once they’ve concluded that some kind of policy or regulatory measures need to be taken. There has to be recognition that that’s step one, not the end of the story. That’s really, in many ways, just the beginning of trying to craft things that are both effective and reflective of the sort of values that exist domestically, as well as constitutional norms and all the other policy priorities that, as you say, can be adjusted with those knobs and dials.
Daphne Keller: Yeah, well, hopefully we’ll do a good job.
Michael Geist: Daphne, thanks so much for joining me on the podcast.
Daphne Keller: Thank you, Michael.
Michael Geist: That’s the Law Bytes podcast for this week. If you have comments, suggestions, or other feedback, write to lawbytes@pobox.com. Follow the podcast on Twitter at @lawbytespod, or Michael Geist at @mgeist. You can download the latest episodes from my website at michaelgeist.ca or subscribe via RSS, Apple Podcasts, Google, or Spotify. The LawBytes Podcast is produced by Gerardo LeBron Laboy. Music by the Laboy brothers: Gerardo and Jose LeBron Laboy. Credit information for the clips featured in this podcast can be found in the show notes for this episode at michaelgeist.ca. I’m Michael Geist. Thanks for listening and see you next time.
The post The LawBytes Podcast, Episode 22: Navigating Intermediary Liability for the Internet – A Conversation with Daphne Keller appeared first on Michael Geist.
Text
The Case For Real World Evidence (RWE)
By DAVID SHAYWITZ, MD
Randomized control trials – RCTs – rose to prominence in the twentieth century as physicians and regulators sought to evaluate rigorously the performance of new medical therapies; by century’s end, RCTs had become, as medical historian Laura Bothwell has noted, “the gold standard of medical knowledge,” occupying the top position of the “methodologic hierarch[y].”
The value of RCTs lies in the random, generally blinded, allocation of patients to treatment or control group, an approach that when properly executed minimizes confounders (based on the presumption that any significant confounder would be randomly allocated as well), and enables researchers to discern the efficacy of the intervention (does it work better – or worse – than controls) and begin to evaluate the safety and side-effects.
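The presumption that randomization evens out confounders can be checked with a toy simulation (illustrative numbers only, not drawn from Bothwell's work):

```python
import random

random.seed(0)  # deterministic for reproducibility

# Toy population with an unmeasured confounder (say, age).
patients = [random.gauss(60, 10) for _ in range(10_000)]

# Randomly allocate each patient to treatment or control, as an RCT would.
treatment, control = [], []
for age in patients:
    (treatment if random.random() < 0.5 else control).append(age)

def mean(xs):
    return sum(xs) / len(xs)

# The confounder's average is nearly identical in both arms, so it
# cannot plausibly explain an observed difference in outcomes.
print(round(mean(treatment), 1), round(mean(control), 1))
```

The two printed means differ only by sampling noise; any confounder, measured or not, is balanced the same way, which is exactly the property the paragraph above describes.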
The power and value of RCTs can be seen with particular clarity in the case of proposed interventions that made so much intuitive sense (at the time) that it seemed questionable, perhaps even immoral, to conduct a study. Examples include use of a particular antiarrhythmic after heart attacks (seemed sensible, but actually caused harm); and use of bone marrow transplants for metastatic breast cancer (study viewed by many as unethical yet revealed no benefit to a procedure associated with significant morbidity).
In these and many other examples, a well-conducted RCT changed clinical practice by delivering a more robust assessment of an emerging technology than instinct and intuition could provide.
RCTs: Golden But Not Perfect
Yet, as Bothwell has eloquently highlighted, RCTs aren’t perfect. For one, not all interventions lend themselves equally well to this approach. While drug studies generally work well (because it’s relatively easy to provide a consistent intervention in a blinded fashion), this can be more difficult, Bothwell observes, in areas such as surgery and psychotherapy.
Another challenge associated with many RCTs is the lengthy cycle time. It can take so long to conduct a study that by the time the results are reported out, science and practice may have moved on; Bothwell notes that by the time the much-anticipated COURAGE study of bare-metal stents was published, many practitioners were already enthusing about the next big thing, drug-eluting stents.
In addition, the subjects who enroll in clinical trials – as highlighted in a recent Health Affairs article published by authors from Flatiron, Foundation Medicine, and the FDA – may not be representative of either the larger population or of the patients who are likely to receive the intervention currently under study; groups underrepresented in clinical trials include the elderly, minorities, and those with poor performance status (the most debilitated).
This begins to get at what may be the most significant limitation of clinical trials: the ability to generalize results. The issue is that clinical trials, by design, are experiments, often high-stakes experiments from the perspective of the subjects (most importantly — it’s their health and often their lives at stake!), as well as the sponsors, who often invest considerable time and capital in the trial. Clinical trial subjects tend to be showered with attention and followed with exceptional care, and study investigators generally do everything in their power to make sure subjects receive their therapy (whether experimental or control) and show up for their follow-up evaluations. Study personnel strive to be extremely responsive to questions and concerns raised by subjects.
But in real practice, YMMV, as they say on the interwebs — your mileage may vary; adherence is less certain, evaluation can be less systematic, and follow-up more sporadic. Conversely, an astute clinician may have figured out a way to make a medicine work better, perhaps implementing a helpful tweak based on a new paper or an empiric observation. Thus the performance of a therapy in a clinical trial may rigorously, scientifically benchmark the potential of a new therapy, compared to a control, but not necessarily predict its actual real-world performance.
The Challenge Of Assessing Real World Performance
In fact, assessing a product’s real world performance can be surprisingly difficult; in contrast to a clinical trial, which is designed explicitly to follow each patient’s journey and to methodically, conscientiously observe and compulsively track a number of pre-specified parameters, the data available for real world patients is, while perhaps more plentiful, captured in a far less systematic fashion.
The principal vehicle of real world clinical data capture, the electronic health record, was designed to support billing (primarily) as well as the provision of clinical care (e.g., affording providers access to test results and previous visit notes). Additional contemporary sources of real world data – as nicely summarized in a recent McKinsey review – include administrative/claims data (organized around billable services) and, increasingly, patient-generated data (such as from wearables like Fitbits).
Organizing and analyzing data – hmm that seems like just the sort of thing at which today’s tech companies excel, at least outside healthcare. Google’s stated mission is to organize all the world’s information; Amazon leverages sophisticated analytics to optimize the consumer’s purchasing experience. But healthcare, as we all know, can be a troublesome beast. Healthcare data are notoriously fragmented; quality is uneven at best; and the approach to health information privacy doesn’t lend itself to a value system based on asking forgiveness instead of permission.
Even so, tech companies big and small are pouring into the real world evidence (RWE) space; what do they hope to accomplish?
Why Tech Is Embracing RWE
The recent (earlier-cited) Health Affairs paper led by authors at one of the most advanced and successful companies in this space, Flatiron, offers a roadmap of sorts. Using a combination of technology and manual data extraction and classification, Flatiron attempts to generate near-clinical-research grade data from oncology EHR records, supplemented with other data – “most notably, mortality data from national and commercial sources,” according to the authors. (For more on Flatiron, and its recent acquisition by Roche for $2.2B, see here; for discussion of two other oncology data companies, see here.)
In addition to enabling broader patient representation – affording greater visibility into patient outcomes in groups traditionally underrepresented in clinical trials – robust RWE can potentially change the approach to at least some clinical studies by offering the possibility of what Flatiron calls a “contemporary” control arm, and what others like Medidata’s Glen de Vries, who is also keenly interested in this concept, describes as a “synthetic control arm.” The idea is that under some circumstances, it might make sense from a pragmatic perspective and/or from an ethical perspective not to randomize patients to a control arm – for example, if the disease is uniformly fatal, and without known treatment. Under select circumstances, perhaps a clinical trial could be conducted comparing patients receiving a new treatment to the RWE data of patients receiving best available treatment – especially since there are 2017 data from Pfizer and Flatiron showing, at least in the example studied, that their RWE data lines up exceptionally well with recent data from the control arm of an RCT. The implication is that if these RWE could be used in place of a control arm, an RCT could be performed much faster and cheaper – and it might be extremely attractive to participants because everyone would receive the active treatment, and no one would be randomized to the control.
There’s at least one example of this playing out; according to a recent article by Justin Petrone in Nature Biotechnology, Flatiron’s RWE “greased Roche’s path to regulatory approval.” Petrone continues,
“The company relied on Flatiron data to expand the label for Alecensa (alectinib), a treatment for people with non-small-cell lung cancer, to 20 countries. Regulators outside the US wanted more information on controls, and it might have taken Roche a year to satisfy those requirements through another route.”
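To make the mechanics concrete: the comparison an external control enables is, at bottom, a difference in outcome rates between a single treated arm and an RWE-derived cohort. Everything below – the cohort sizes, response counts, and the choice of a simple two-proportion z-test – is a hypothetical sketch, not Flatiron's, Pfizer's, or Roche's actual methodology:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in proportions, using the pooled SE."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, z, p_value

# Hypothetical numbers: 120 single-arm trial patients on the new drug,
# compared against 480 matched RWE patients on best available treatment.
diff, z, p = two_proportion_z(54, 120, 168, 480)
print(f"response-rate difference={diff:.3f}, z={z:.2f}, p={p:.4f}")
```

In practice the hard part is not this arithmetic but establishing that the RWE cohort is genuinely comparable – matched on eligibility criteria, line of therapy, and measurement timing.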
While the potential of RWE is clearly starting to capture the imagination, many physicians caution that interest in RWE shouldn’t occur at the expense of the RCT gold standard. “As a clinician I would not treat without RCT,” tweeted MGH cardiologist Chris Newton-Cheh, noting “Random, blinded allocation is best protection against biased treatment allocation and confounding.” Or, as Farzad Mostashari tweeted, with characteristic charm, “You will never know the unknown confounders that randomization protected you against. It’s like Batman.”
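Mostashari's point about unknown confounders can be made concrete with a tiny simulation. All of it is invented for illustration: a single unmeasured "frailty" trait makes patients both less likely to receive the drug and less likely to do well, so the naive observational comparison overstates the true +0.10 benefit, while coin-flip assignment recovers it:

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Unmeasured confounder (frailty) lowers both treatment uptake and survival."""
    naive_treated, naive_control, rct_treated, rct_control = [], [], [], []
    for _ in range(n):
        frail = random.random() < 0.5
        # Observational world: frail patients are less likely to get the drug
        obs_treated = random.random() < (0.2 if frail else 0.8)
        # True model: drug adds +0.10 to survival probability; frailty subtracts 0.30
        def outcome(treated):
            p = 0.5 + (0.10 if treated else 0.0) - (0.30 if frail else 0.0)
            return 1 if random.random() < p else 0
        (naive_treated if obs_treated else naive_control).append(outcome(obs_treated))
        # RCT world: coin-flip assignment breaks the frailty/treatment link
        assign = random.random() < 0.5
        (rct_treated if assign else rct_control).append(outcome(assign))
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(naive_treated) - mean(naive_control),
            mean(rct_treated) - mean(rct_control))

obs_effect, rct_effect = simulate()
print(f"observational estimate: {obs_effect:+.3f}  (true effect +0.100)")
print(f"randomized estimate:    {rct_effect:+.3f}")
```

The observational estimate lands near +0.28 – nearly triple the true effect – without anyone having measured, or even named, the confounder.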
Even so, RWE – as described so nicely in the McKinsey review – offers an opportunity to “evaluate new treatments when randomization to placebo for clinical trials may be impossible, impractical, or unethical.” The FDA, for its part, notes:
“In some cases, a “traditional” clinical trial may be impractical or excessively challenging to conduct. Ethical issues regarding treatment assignment, and other similar challenges, may present themselves when developing and attempting to execute a high quality clinical trial. Analyses of RWD [real world data], using appropriate methods, may in some cases provide similar information with comparable or even superior characteristics to information collected and analyzed through a traditional clinical trial.”
While robust RWE isn’t likely to displace the RCT, it may lower the threshold for provisionally embracing an early positive RCT result, knowing that the treatment’s real world performance would be reliably and rapidly evaluable.
Robust RWE also affords the opportunity to better understand other aspects of a product’s performance, including its cost (versus other treatments), the populations in which it seems most effective (enabling the sort of work that led to the approval of pembrolizumab (Keytruda) for patients with high microsatellite instability – for more see here and here), as well as the populations where it isn’t working, which could lead to additional labeling restrictions from regulators and/or additional prescribing limitations by payors. Pharma companies would be well served by seeking to substratify in advance, rather than discovering soon after launch that robust RWE suggests a more restricted addressable population than the product team had anticipated.
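Mechanically, substratifying RWE is just a grouped response-rate computation; the hard work is upstream, in data curation. The records below are toy data invented to echo the microsatellite-instability example, not real patient data:

```python
from collections import defaultdict

# Hypothetical post-launch RWE records: (biomarker_status, responded)
records = [
    ("MSI-high", True), ("MSI-high", True), ("MSI-high", False),
    ("MSI-high", True), ("MSS", False), ("MSS", False),
    ("MSS", True), ("MSS", False), ("MSS", False), ("MSS", False),
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [responders, total]
for subgroup, responded in records:
    counts[subgroup][0] += int(responded)
    counts[subgroup][1] += 1

for subgroup, (resp, total) in sorted(counts.items()):
    print(f"{subgroup}: {resp}/{total} responded ({resp / total:.0%})")
```

A split this stark – most of the benefit concentrated in one biomarker-defined subgroup – is exactly the kind of signal that invites either a tighter label or a sharper commercial focus.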
RWE: Less Prestigious Than RCT, But More Useful?
Thus far, we’ve considered robust RWE as a somewhat imperfect alternative to the scientific gold standard, RCT. But if you take a step back, you might ask why this benchmark takes priority over what may be a more relevant standard – how a product performs in the real world. (As I joked on twitter, this reminds me a bit of a comment biochemists used to make to us yeast geneticists when we saw something in cells not recapitulated in their highly reduced system – a phenomenon they termed, tongue in cheek, an “in vivo artifact.”)
Consider this example: if I wanted to develop a new drug that could reverse type two diabetes, I’d need to prove it worked in two robust RCTs, vetted by the FDA, before I could market it to a single patient. Yet Virta, a behavioral health company (see here), has a supervised low-carb program it reports reverses type two diabetes, and has presented data from a non-randomized trial involving self-selected patients. Unquestionably, a much lower standard.
On the other hand, Virta, and tech-enabled service companies like it, are likely reimbursed (at least in part) only if they deliver particular results – only if they reverse diabetes (and thus reduce costs) in a certain number of patients. Isn’t this, at some level, a higher standard? A pill just has to show it could work to merit reimbursement; Virta actually has to deliver results.
Chris Hogg, a former pharma strategist who is now COO of the digital health company Propeller Health, pointed this out on Twitter, noting that service offerings tend to require from payors “proof of use/retention” as well as “proof the solution works in their specific environment.” He adds, “You could flip it to say pharma _only_ needs to show efficacy in highly controlled settings, but services require that and proof of efficacy in real-world settings. Bar might actually be higher for new services.” (Note: tweets quoted in this post lightly edited for clarity).
Health consultant Andrew Matzkin (of Health Advances) then chimed in, “But pharma also has to prove safety to a degree that is just not relevant for most digital health. For digital health, that’s a feature, not a bug, with advantages for patients and for businesses/investors. Flipside is lack of established pathways for payment and distribution.” He added, “And I think that soon, drugs will be held to account for real world efficacy too as RWE becomes better and more widely accepted. Once that happens, the advantages of digital health (with real efficacy data) will be even starker.”
Matzkin continued, “I think the model will be provisional approval and coverage based on less RCT data, followed by RWE monitoring that could result in restricted $ and/or rescinded label indications. So a little different. But real world outcomes will matter. For everything.”
As Matzkin says, it’s hard to imagine that the ability to measure the performance of an intervention in a trusted, near-real-time fashion wouldn’t profoundly disrupt both pharma and healthcare. Moreover, while the opportunity for potentially faster approvals would seem like a real win for pharma companies (and inevitably of concern to industry critics), it’s likely that such a capability – a real world, real time dashboard of exactly how each medicine and treatment is performing in real patients – could also threaten pharma, for all the right reasons.
Drugs that fail to make a difference in the real world would be rapidly surfaced, as would some subtle safety issues. Performance-based drug contracting, which historically was always discussed but seldom implemented due to the challenge of refereeing, could potentially be meaningfully enabled by trusted real-world reporting. Conversely, new medicines that don’t seem meaningfully different in an RCT setting could turn out to be much more effective in a real world setting if they are actually better tolerated and embraced by actual patients. In addition, approaches (digital or not) that impact real world performance would likely be prioritized as well, and optimization could occur continuously, informed by reliable RWE.
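To make the contracting point concrete: once real-world response is reliably reported, an outcomes-based rebate reduces to simple arithmetic. The contract structure and every number below are hypothetical illustrations:

```python
def outcomes_rebate(list_price, n_patients, n_responders, target_rate, rebate_per_miss):
    """Sketch of an outcomes-based contract: the payor is rebated a fixed
    amount for each patient short of the agreed real-world response target."""
    target = target_rate * n_patients
    shortfall = max(0.0, target - n_responders)
    rebate = shortfall * rebate_per_miss
    net_cost = list_price * n_patients - rebate
    return rebate, net_cost

# Hypothetical terms: $10k per course, 200 treated patients, contract targets
# a 60% response rate; the RWE dashboard reports 96 responders; the payor is
# rebated $8k per missed responder.
rebate, net = outcomes_rebate(10_000, 200, 96, 0.60, 8_000)
print(f"rebate=${rebate:,.0f}, net cost=${net:,.0f}")
```

Historically the sticking point was the refereeing – agreeing on who counts as a responder – which is precisely what trusted RWE reporting would supply.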
In short, at its best, real world evidence provides an opportunity to evaluate medical interventions on what arguably matters most – real world performance; and evidence delivered reliably and in near real time would provide a meaningful incentive to optimize on this foundational measure, potentially bringing patient, provider, payor, and manufacturer into better alignment, while also sharpening what are likely to be real remaining differences, such as determining the value of a particular increase in performance.
A Dynamic Balance
Which brings us to my last question: what is the optimal role of RCTs in a world with quality, trustworthy RWE? (Not that we’re quite there yet.) The reflexive answer, of course, is RCTs for everything, followed by RWE. Perhaps this is right, though I wonder if there are circumstances when it’s preferable, or at least permissible (perhaps because the risk is low, as Matzkin suggested in the context of digital health interventions), to develop a less robust clinical trial evidence base and focus on delivering and optimizing real world performance.
My hope is that, unlike our tribal politics, we approach this question with humility and nuance, sensitive to the idea that the “right” approach might vary by circumstance. Figuring out the right balance here promises to be a difficult and dynamic process, a challenging problem with which we should all be grateful for the opportunity to wrestle.
David Shaywitz is a Senior Partner with Takeda Ventures and a Visiting Scientist at Harvard Medical School.
Article source: The Health Care Blog
0 notes
Text
Real World Evidence (RWE) vs Randomized Control Trials (RCT): The Battle For the Future of Medicine
By DAVID SHAYWITZ, MD
Randomized control trials – RCTs – rose to prominence in the twentieth century as physicians and regulators sought to evaluate rigorously the performance of new medical therapies; by century’s end, RCTs had become, as medical historian Laura Bothwell has noted, “the gold standard of medical knowledge,” occupying the top position of the “methodologic heirarch[y].”
The value of RCTs lies in the random, generally blinded, allocation of patients to treatment or control group, an approach that when properly executed minimizes confounders (based on the presumption that any significant confounder would be randomly allocated as well), and enables researchers to discern the efficacy of the intervention (does it work better – or worse – than controls) and begin to evaluate the safety and side-effects.
The power and value of RCTs can be seen with particular clarity in the case of proposed interventions that made so much intuitive sense (at the time) that it seemed questionable, perhaps even immoral, to conduct a study. Examples include use of a particular antiarrhythmic after heart attacks (seemed sensible, but actually caused harm); and use of bone marrow transplants for metastatic breast cancer (study viewed by many as unethical yet revealed no benefit to a procedure associated with significant morbidity).
In these and many other examples, a well-conducted RCT changed clinical practice by delivering a more robust assessment of an emerging technology than instinct and intuition could provide.
RCTs: Golden But Not Perfect
Yet, as Bothwell has eloquently highlighted, RCTs aren’t perfect. For one, not all interventions lend themselves equally well to this approach. While drug studies generally work well (because it’s relatively easy to provide a consistent intervention in a blinded fashion), this can be more difficult, Bothwell observes, in areas such as surgery and psychotherapy.
Another challenge associated with many RCTs is the lengthy cycle time. It can take so long to conduct a study that by the time the results are reported out, science and practice may have moved on; Bothwell notes that by the time much-anticipated COURAGE study of bare metal stents was published, many practitioners were already enthusing about the next big thing, drug-eluting stents.
In addition, the subjects who enroll in clinical trials – as highlighted in a recent Health Affairs article published by authors from Flatiron, Foundation Medicine, and the FDA – may not be representative of either the larger population or of the patients who are likely to receive the intervention currently under study; groups underrepresented in clinical trials include the elderly, minorities, and those with poor performance status (the most debilitated).
This begins to get at what may be the most significant limitations of clinical trials: the ability to generalize results. The issue is that clinical trials, by design, are experiments, often high-stakes experiments from perspective of the subjects (most importantly — it’s their health and often their lives at stake!), as well as the sponsors, who often invest considerable time and capital in the trial. Clinical trial subjects tend to be showered with attention and followed with exceptional care, and study investigators generally do everything in their power to make sure subjects receive their therapy (whether experimental or control) and show up for their follow-up evaluations. Study personnel strive to be extremely responsive to questions and concerns raised by subjects.
But in real practice, YMMV, as they say on the interwebs — your mileage may vary; adherence is less certain, evaluation can be less systematic, and follow-up more sporadic. Conversely, an astute clinician may have figured out a way to make a medicine better, perhaps implementing a helpful tweak based on a new paper or an empiric observation. Thus the performance of a therapy in a clinical trial may rigorously, scientifically benchmark the potential of a new therapy, compared to a control, but not necessarily predict it’s actual real-world performance.
The Challenge Of Assessing Real World Performance
In fact, assessing a product’s real world performance can be surprisingly difficult; in contrast to a clinical trial, which is designed explicitly to follow each patient’s journey and to methodically, conscientiously observe and compulsively track a number of pre-specified parameters, the data available for real world patients is, while perhaps more plentiful, captured in a far less systematic fashion.
The principle vehicle of real world clinical data capture, the electronic health record, was designed to support billing (primarily) as well as the provision of clinical care (eg affording providers access to test results and previous visit notes). Additional contemporary sources of real world data – as nicely summarized in a recent McKinsey review – include administrative/claims data (organized around billable services) and increasingly, patient-generated data (such as from wearables like Fitbits).
Organizing and analyzing data – hmm that seems like just the sort of thing at which today’s tech companies excel, at least outside healthcare. Google’s stated mission is to organize all the world’s information; Amazon leverages sophisticated analytics to optimize the consumer’s purchasing experience. But healthcare, as we all know, can be a troublesome beast. Healthcare data are notoriously fragmented; quality is uneven at best; and the approach to health information privacy doesn’t lend itself to a value system based on asking forgiveness instead of permission.
Even so, tech companies big and small are pouring into the real world evidence (RWE) space; what do they hope to accomplish?
Why Tech Is Embracing RWE
The recent (earlier-cited) Health Affairs paper led by authors at one of the most advanced and successful companies in this space, Flatiron, offers a roadmap of sorts. Using a combination of technology and manual data extraction and classification, Flatiron attempts to generate near-clinical-research grade data from oncology EHR records, supplemented with other data – “most notably, mortality data from national and commercial sources,” according to the authors. (For more on Flatiron, and its recent acquisition by Roche for $2.2B, see here; for discussion of two other oncology data companies, see here.)
In addition to enabling broader patient representation – affording greater visibility into patient outcomes in groups traditionally underrepresented in clinical trials – robust RWE can potentially change the approach to at least some clinical studies by offering the possibility of what Flatiron calls a “contemporary” control arm, and what others like Medidata’s Glen de Vries, who is also keenly interested in this concept, describes as a “synthetic control arm.” The idea is that under some circumstances, it might make sense from a pragmatic perspective and/or from an ethical perspective not to randomize patients to a control arm – for example, if the disease is uniformly fatal, and without known treatment. Under select circumstances, perhaps a clinical trial could be conducted comparing patients receiving a new treatment to the RWE data of patients receiving best available treatment – especially since there are 2017 data from Pfizer and Flatiron showing, at least in the example studied, that their RWE data lines up exceptionally well with recent data from the control arm of an RCT. The implication is that if these RWE could be used in place of a control arm, an RCT could be performed much faster and cheaper – and it might be extremely attractive to participants because everyone would receive the active treatment, and no one would be randomized to the control.
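The arithmetic behind such an external control comparison is simple; the hard part is ensuring the RWE cohort is genuinely comparable to the trial population. As a minimal sketch (all counts hypothetical, and not drawn from the Pfizer/Flatiron work), a single-arm trial’s response rate might be compared against an RWE-derived control rate with a two-proportion z-test:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-statistic: single-arm trial vs RWE external control."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical numbers: 60/120 responders on the new treatment,
# 45/150 responders in the matched RWE control cohort.
z = two_proportion_z(60, 120, 45, 150)
print(round(z, 2))  # → 3.35, well past the conventional 1.96 threshold
```

The statistics are the easy part; the credibility of the comparison rests entirely on how well the RWE cohort was selected and matched, which is where companies like Flatiron invest their effort.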
There’s at least one example of this playing out; according to a recent article by Justin Petrone in Nature Biotechnology, Flatiron’s RWE “greased Roche’s path to regulatory approval.” Petrone continues,
“The company relied on Flatiron data to expand the label for Alecensa (alectinib), a treatment for people with non-small-cell lung cancer, to 20 countries. Regulators outside the US wanted more information on controls, and it might have taken Roche a year to satisfy those requirements through another route.”
While the potential of RWE is clearly starting to capture the imagination, many physicians caution that interest in RWE shouldn’t occur at the expense of the RCT gold standard. “As a clinician I would not treat without RCT,” tweeted MGH cardiologist Chris Newton-Cheh, noting “Random, blinded allocation is best protection against biased treatment allocation and confounding.” Or, as Farzad Mostashari tweeted, with characteristic charm, “You will never know the unknown confounders that randomization protected you against. It’s like Batman.”
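Mostashari’s point about unknown confounders can be made concrete with a toy simulation (all parameters invented). Suppose sicker patients are more likely to receive a new drug, and sickness independently worsens outcomes: the naive observational comparison is badly biased, while coin-flip assignment recovers the true effect without ever measuring the confounder.

```python
import random

random.seed(0)

def estimate_effect(randomized, n=50_000, true_effect=1.0):
    """Outcome = true_effect*treated - 2*severity + noise.
    Severity is an unmeasured confounder; in the observational case
    it also drives who gets treated."""
    treated_outcomes, control_outcomes = [], []
    for _ in range(n):
        severity = random.random()  # unobserved confounder, uniform on [0, 1]
        if randomized:
            treated = random.random() < 0.5        # coin-flip assignment
        else:
            treated = random.random() < severity   # sicker -> more likely treated
        outcome = true_effect * treated - 2.0 * severity + random.gauss(0, 0.5)
        (treated_outcomes if treated else control_outcomes).append(outcome)
    return (sum(treated_outcomes) / len(treated_outcomes)
            - sum(control_outcomes) / len(control_outcomes))

print(round(estimate_effect(randomized=True), 2))   # ≈ 1.0: unbiased
print(round(estimate_effect(randomized=False), 2))  # ≈ 0.33: badly biased
```

The observational estimate lands near 0.33 rather than 1.0 even though severity never appears in the analysis, which is exactly Newton-Cheh’s warning: you cannot adjust for a confounder you never measured.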
Even so, RWE – as described so nicely in the McKinsey review – offers an opportunity to “evaluate new treatments when randomization to placebo for clinical trials may be impossible, impractical, or unethical.” The FDA, for its part, notes,
“In some cases, a ‘traditional’ clinical trial may be impractical or excessively challenging to conduct. Ethical issues regarding treatment assignment, and other similar challenges, may present themselves when developing and attempting to execute a high quality clinical trial. Analyses of RWD [real world data], using appropriate methods, may in some cases provide similar information with comparable or even superior characteristics to information collected and analyzed through a traditional clinical trial.”
While robust RWE isn’t likely to displace the RCT, it may lower the threshold for provisionally embracing an early positive RCT result, knowing that the treatment’s real world performance would be reliably and rapidly evaluable.
Robust RWE also affords the opportunity to better understand other aspects of a product’s performance, including its cost (versus other treatments), the populations in which it seems most effective (enabling the sort of work that led to the approval of pembrolizumab (Keytruda) for patients with high microsatellite instability – for more see here and here), as well as an understanding of populations where it isn’t working, which could lead either to additional labeling restrictions from regulators and/or additional prescribing limitations by payors. Pharma companies would be well served by seeking to substratify in advance, rather than discovering rather quickly after launch that robust RWE suggests a more restricted addressable population than the product team had anticipated.
RWE: Less Prestigious Than RCT, But More Useful?
Thus far, we’ve considered robust RWE as a somewhat imperfect alternative to the scientific gold standard, RCT. But if you take a step back, you might ask why this benchmark takes priority over what may be a more relevant standard – how a product performs in the real world. (As I joked on twitter, this reminds me a bit of a comment biochemists used to make to us yeast geneticists when we saw something in cells not recapitulated in their highly reduced system – a phenomenon they termed, tongue in cheek, an “in vivo artifact.”)
Consider this example: if I wanted to develop a new drug that could reverse type 2 diabetes, I’d need to prove it worked in two robust RCTs, vetted by the FDA, before I could market it to a single patient. Yet Virta, a behavioral health company (see here), has a supervised low-carb program it reports reverses type 2 diabetes, and has presented data from a non-randomized trial involving self-selected patients. Unquestionably, a much lower standard.
Yet on the other hand, Virta, and tech-enabled service companies like it, are likely reimbursed (at least in part) only if they deliver particular results – only if they reverse diabetes (and thus reduce costs) in a certain number of patients. Isn’t this, at some level, a higher standard? A pill just has to show it could work to merit reimbursement; Virta actually has to deliver results.
Chris Hogg, a former pharma strategist who is now COO of the digital health company Propeller Health, pointed this out on Twitter, noting that service offerings tend to require from payors “proof of use/retention” as well as “proof the solution works in their specific environment.” He adds, “You could flip it to say pharma _only_ needs to show efficacy in highly controlled settings, but services require that and proof of efficacy in real-world settings. Bar might actually be higher for new services.” (Note: tweets quoted in this post lightly edited for clarity.)
Health consultant Andrew Matzkin (of Health Advances) then chimed in, “But pharma also has to prove safety to a degree that is just not relevant for most digital health. For digital health, that’s a feature, not a bug, with advantages for patients and for businesses/investors. Flipside is lack of established pathways for payment and distribution.” He added, “And I think that soon, drugs will be held to account for real world efficacy too as RWE becomes better and more widely accepted. Once that happens, the advantages of digital health (with real efficacy data) will be even starker.”
Matzkin continued, “I think the model will be provisional approval and coverage based on less RCT data, followed by RWE monitoring that could result in restricted $ and/or rescinded label indications. So a little different. But real world outcomes will matter. For everything.”
As Matzkin says, it’s hard to imagine that the ability to measure the performance of an intervention in a trusted, near-real-time fashion wouldn’t profoundly disrupt both pharma and healthcare. Moreover, while the opportunity for potentially faster approvals would seem like a real win for pharma companies (and inevitably of concern to industry critics), it’s likely that such a capability – a real world, real time dashboard of exactly how each medicine and treatment is performing in real patients – could also threaten pharma, for all the right reasons.
Drugs that fail to make a difference in the real world would be rapidly surfaced, as would some subtle safety issues. Performance-based drug contracting, which historically was always discussed but seldom implemented due to the challenge of refereeing, could potentially be meaningfully enabled by trusted real-world reporting. Conversely, new medicines that don’t seem meaningfully different in an RCT setting could turn out to be much more effective in a real world setting if they are actually better tolerated and embraced by actual patients. In addition, approaches (digital or not) that impact real world performance would likely be prioritized as well, and optimization could occur continuously, informed by reliable RWE.
In short, at its best, real world evidence provides an opportunity to evaluate medical interventions on what arguably matters most – real world performance; and evidence delivered reliably and in near real time would provide a meaningful incentive to optimize on this foundational measure, potentially bringing patient, provider, payor, and manufacturer into better alignment, while also sharpening what are likely to be real remaining differences, such as determining the value of a particular increase in performance.
A Dynamic Balance
Which brings us to my last question: what is the optimal role of RCTs in a world with quality, trustworthy RWE (not that we’re quite there yet…)? The reflexive answer, of course, is RCTs for everything, followed by RWE. Perhaps this is right, though I wonder if there are circumstances when it’s preferable, or at least permissible (perhaps because the risk is low, as Matzkin suggested in the context of digital health interventions), to develop a less robust clinical trial evidence base and focus on delivering and optimizing real world performance.
My hope is that, unlike our tribal politics, we approach this question with humility and nuance, sensitive to the idea that the “right” approach might vary by circumstance. Figuring out the right balance here promises to be a difficult and dynamic process, a challenging problem with which we should all be grateful for the opportunity to wrestle.
David Shaywitz is a Senior Partner with Takeda Ventures and a Visiting Scientist at Harvard Medical School.
Real World Evidence (RWE) vs Randomized Control Trials (RCT): The Battle For the Future of Medicine