Banning TikTok Was Wrong; Ignoring the Ban is Lawlessness

This article first appeared in the Autumn 2025 issue of 2600 The Hacker Quarterly

It started with an Executive Order issued on August 6, 2020, by President Trump that sought to ban American companies and persons from doing business with TikTok's parent company, ByteDance, or any of its subsidiaries. This was ostensibly because ByteDance, a company based in the People's Republic of China, posed a security threat to the United States. Not long after, on August 14, 2020, Trump issued a second Executive Order, this time directing ByteDance to divest all of its operations in the United States within 90 days. This was the first actual attempt at a ban of TikTok in the United States.

This resulted in TikTok suing the Trump administration, arguing that the executive orders violated due process.

Joe Biden was elected president in November of that year, and shortly into his term, in February 2021, he brought Trump's plan to ban TikTok to a halt by postponing the legal cases that were working their way through the courts.

Things were pretty quiet about a TikTok ban for a good while, but there were controversies about the app, such as the data it collected and behavior of the algorithm.

Then, on December 2, 2022, during a talk at the University of Michigan's Ford School of Public Policy, FBI Director Christopher Wray raised concerns that the Chinese government could use TikTok's recommendation algorithm to manipulate content for influence operations. Among the things he said: "… so all of these things are in the hands of a government that doesn't share our values, and that has a mission that's very much at odds with what's in the best interests of the United States…" Remember this quote. Among all the scare tactics about invasions of privacy and the potential for espionage is this one truth.
People in the United States government object to the content shared on TikTok: the speech presented by the app and its algorithm. If this were about data harvesting, as they claim, the Chinese-owned apps Temu and Shein are far worse offenders in that regard, but they sell goods; they don't provide content. The bans so far have overlooked those companies and others, foreign and domestic, that harvest and sell our data. Surveillance capitalism, the driving economic force of the Internet, has data brokering as its foundation.

In this vein of concern over sharing user data with the Chinese government, in February of 2023 both the FCC and the FBI warned of this possibility, and the White House ordered TikTok deleted from all government-issued devices.

The next move by the United States government came on March 23, 2023, when TikTok CEO Shou Zi Chew was brought before a congressional committee for almost six hours of Sinophobia (though Chew is from Singapore, and TikTok at the time was based in Los Angeles and Singapore, and not available in China), misunderstanding of technology, and unfounded accusations of connection to and control by the CCP that echoed and expanded on Wray's comments from four months earlier.

Legislation was put forward to ban TikTok, but it failed to find support in Congress for many months, until March of 2024, when the House of Representatives passed the TikTok sell-or-ban bill. In April, the Senate did the same, and when the legislation reached President Biden's desk, he signed it into law. TikTok and ByteDance sued the federal government on First Amendment grounds, and both a court of appeals and the Supreme Court upheld the law. By law, TikTok was banned as of January 19, 2025.

So what happened between March of 2023 and March of 2024 that overcame the initial resistance to ban the app, making it the law of the land? The answer lies in a historical event that happened in late 2023 and the coverage of what came after on TikTok. This is the Hamas attack on Israel on October 7, 2023, and Israel’s genocidal response to that attack.

It's not often talked about, but the United States economy is driven by war. The United States spends more on its military than the next several largest spenders combined. America's defense industry, counting contractors and manufacturers of arms and military equipment, is among the largest employers in the country. This is the military-industrial complex that Eisenhower warned the people about in his farewell address of January 17, 1961. If the American empire is not directly fighting in conflicts, it will often provide or sell arms to its allies and proxies. The United States has a long history of supporting Israel and the Zionist project on which it is founded. Under President Joe Biden, American weapons and American foreign policy made possible a genocide of the Palestinian people.

The American government's position on the Palestinian genocide was support for the genocide. It was official American policy to back Israel unconditionally, even contravening both domestic and international law to do so.

American mass media toed the line, and a pro-Israel / anti-Palestine narrative was the norm in print and on television. There was no nuance in the coverage; people took binary positions, with no room for actual discussion or for the human cost. (See my previous article in the Spring 2024 issue.)

However, on TikTok, a different picture of the conflict emerged. Palestinian creators could share their lived experiences directly, without being filtered through Israeli hasbara (explanations/propaganda). These videos were shared widely, and because of how the TikTok algorithm works, many people were exposed to the genocide directly, without the governments supporting the eradication of a people putting their spin and justification on it.

This was the real concern of Democrats and Republicans alike: that young people, mostly, were getting a narrative that was, in the words of Director Wray, "very much at odds with what's in the best interests of the United States [Government]," on a platform they did not control. Other social media platforms cooperated with the interests of the American government. Meta, for example, suppressed posts on Instagram and Threads by Palestinians or those with pro-Palestinian stances. But on TikTok, there was an unhindered view of Palestinian suffering and resistance.

The TikTok ban was always conditional. It was a strong-arm tactic to get ByteDance to divest its ownership in favor of American ownership, an American owner that they hoped would be more on board with American narratives.

Well, ByteDance never divested, and in the waning days of the Biden administration the ban went into effect, making TikTok (and other apps owned by ByteDance, such as the game Marvel Snap) unavailable in the United States for about a day. The following day, American TikTok users were greeted with a message that, thanks to incoming President Trump, there was an agreement to keep TikTok active in the United States.

If there is one thing we know about Trump, it's that he doesn't make any deal from which he doesn't profit or get something of value. In this new post-ban era, TikTok is operating (illegally) under the good graces of Trump. It now does business so as not to upset the powers that be, and is under the thumb of the United States government. The app has even returned to the Google Play Store and Apple App Store as of this writing.

All levels of government are ignoring the fact that TikTok is operating illegally according to a law passed by Congress, signed by the President, and upheld by the courts. And this small thing is being done to normalize exactly that. TikTok is widely popular, and the ban, as censorious and wrong as it is, is widely unpopular. If a law were going to be ignored, this was a wily choice for the first one. And make no mistake: this ignoring of a law and a court ruling on the first day of the Trump administration is the first, I predict, of many.

As of the writing of this article in the first week of March 2025, the actions of Elon Musk's DOGE are being overturned in the courts, with decisions saying they are clearly breaking the law, and the general consensus is to wait and see whether the Executive branch complies with the courts. My prediction is that the Trump administration will continue with lawlessness, ignoring any statute or court opinion contrary to its agenda.

And now, two weeks later, working on a second draft of this article, the Trump administration has targeted legal residents (green card holders) who hold pro-Palestinian views for deportation, attempting to skip the due process normally afforded green card holders and branding them criminals and terrorists for not supporting the American-funded genocide by Israel against the Palestinian people in Gaza. The first was Mahmoud Khalil, who is not charged with any crime, unless you imagine we live in a time when thoughtcrime is prosecutable. Others have since followed.

And this is how it starts. Authoritarians begin with things that are actually popular, like ignoring a law that would keep people from their favorite app, or persecuting a group that makes up at most 1.4 percent of the population, such as passing a law that affects fewer than ten college athletes out of over 510,000. Fascism starts small to make bigger moves later. It's "just" ignoring an unpopular ban, before other laws, laws that protect the vulnerable, get ignored. It's "just" persecuting trans people, until the same mechanisms are used to persecute other groups, maybe even one you find yourself in.

Shout Outs: Sista, Owlerine, Raincoaster, Cosmic Surfer

Biohacking and Bodily Autonomy

[Image: "The New Da Vinci Code," a representation of digitization with binary code and a circuit board]

Bodily autonomy is the fundamental human right from which all others are derived.

I believe we should have the right to safely modify our bodies, however we wish, even in ways that do not have broad societal acceptance.

Many body modifications are medically necessary for life, like the removal of my gallbladder when it became riddled with stones, or my double lung transplant when my immune system decided to attack my healthy lungs, filling them with scarring and fibrosis. Other medically necessary modifications exist, such as replacing failed joints with artificial ones, repairing bone fractures with plates and pins, or amputating a limb due to injury or cancer. But if I wanted to lop off a finger or an ear for aesthetic reasons, that should be my right as well, not something permitted only to treat an illness or disorder.

I have a MedPort in my chest to receive intravenous medical treatments more easily. This is an enhancement via technology: some silicone and titanium that has a function. My body records the scars I have from injuries and my handful of surgeries. I have a tattoo on my sternum incorporating my largest surgical scar, my ear is pierced, and I paint my nails. These are all permanent and temporary bodily modifications for pure aesthetics. Pretty tame and fairly widely accepted (though certainly not universally), but if I or anyone else wanted to add scars, pigment, or implants to our bodies just because we like how they look, or because they give us additional function, nothing should prevent us from doing so.

My body is mine. Any choice to alter it should belong to me. If I want to microdose estrogen as a treat, absent any dysphoria, I should be able to do so simply because I desire to. The fact that trans people have to be medicalized or pathologized and made to jump through bureaucratic hoops to be "allowed" to transition is ridiculous. It is medical oppression. People should be allowed to transition just because they want to. For decades, and to this day, trans patients have had to follow a specific script and present in a specific way to get prescribed HRT. This knowledge circulates within the trans community as a way for those who have gone before to help those who come after get past the guardians at the gates. This is why #TransRights are human rights and why protecting them benefits everyone. Whenever we protect the rights of marginalized people, those rights still apply to everyone else who doesn't necessarily share their marginalization. It all comes down to bodily autonomy. Whether you are transgender, transhuman, a hobbyist biohacker, or just want a piercing or body art, the right to hack or alter ourselves should be inviolate, and it should extend to everyone.

Our bodies belong to ourselves, and we should have control to hack, modify, or alter ourselves physically, chemically, or technologically without interference or hindrance.

This essay was adapted from a Bluesky thread.

#ProjectBasilisk: Deconstructing Roko’s Basilisk

Catholicism was my cradle religion. As a teenager, I was an explorer and participated in many Christian denominations, often at the same time, even though the different churches I attended would have considered each other heretical. I was exposed to and studied many different schools of Christian theology and even a little bit of Judaism. Later in life, I had an atheist phase and took part in the culture of debating theists (though universally they were Christians of some stripe), armed with pithy or logical answers to the common arguments known as Christian apologetics. One of the weakest of these apologetics is Pascal's Wager.

Pascal's Wager is named after its creator, Blaise Pascal (1623-1662), who used an early form of decision theory to "prove" that, even though knowing whether God exists is impossible, the safe bet is to believe in the Christian God. The reasoning: if He is real and you believe, you receive eternal reward. If He is real and you don't believe, you receive eternal punishment. And if God doesn't exist, it doesn't matter either way; nothing is lost by believing.
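Reduced to a toy expected-value calculation (the payoffs and the probability below are arbitrary placeholders for illustration, not anything Pascal actually wrote down), the wager looks like this:

```python
import math

# Pascal's Wager as a toy payoff table. Pascal's trick is that any nonzero
# chance of an infinite payoff swamps every finite cost of believing.
payoff = {
    ("believe", "god exists"):       math.inf,   # eternal reward
    ("believe", "no god"):           -1,         # small finite cost of piety
    ("don't believe", "god exists"): -math.inf,  # eternal punishment
    ("don't believe", "no god"):     0,
}

p_god = 1e-6  # any nonzero probability will do

for choice in ("believe", "don't believe"):
    ev = p_god * payoff[(choice, "god exists")] + (1 - p_god) * payoff[(choice, "no god")]
    print(f"{choice:>13}: expected value {ev}")
# believe -> inf, don't believe -> -inf, so "believe" always wins on paper
```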

Even when I was a believer, I thought Pascal's Wager was weak sauce, mostly because of Pascal's lack of imagination or knowledge about potential afterlives. Say you choose to believe in the Christian god, and when you die you are greeted by a Valkyrie and escorted to Helheim for dying of old age or in a way that was dishonorable. Or, upon death, you find yourself in the Duat, traversing obstacles until you meet Osiris, who weighs your heart against a feather. Or you find yourself in a different kind of Hell, where you go through a series of courts to be judged (don't worry, you will be assigned defense counsel) in order to determine whether you will be reincarnated. Or any number of possible afterlives beyond the dichotomy of (Christian) Heaven or Hell. It is my experience that most of the time you are presented with a dichotomy, it is a false one, and there are actually many other choices before you. Pascal's Wager falls apart once you realize that if god(s) exist, it may not be the Christian version in charge of your afterlife.

This brings us to Roko's Basilisk, a pseudo-intellectual thought experiment by a user (Roko) on the LessWrong forum about a future, omniscient, otherwise benevolent superintelligence that presents a dichotomy of its own. Once you are aware of the possibility of the Basilisk, you must work towards its creation; if you do not help bring it into being, it will recreate you in VR and torture that simulacrum for all eternity. This is intended as motivation for people to work on and develop the AI systems that will lead to the Basilisk.

As should be clear, this is just a search-and-replace of Pascal's Wager, substituting AI for God and VR for Hell: a reskinned Xtianity for tech nerds, where developing the AI will benefit mankind and not doing so will earn you eternal torment.

So now let's examine the premise further and discover that, like Pascal's Wager, it lacks imagination about the possibilities.

First off, an Asimov Zeroth Law scenario is one that has been done to death in various forms, and this is an especially stupid iteration. Any intelligent being that believes torture is not only acceptable but also productive is neither moral nor ethical. An immoral, unethical AI would not be otherwise benevolent; it would be a psychopath. And as machine learning, large language models, and other developments in AI have shown, all AI models share the biases of their creators. Not only that, but these biases are amplified through their training data. The type of people who believe in Roko's Basilisk are likewise without empathy if they think something that will torture people is worth creating. They are working towards a torture nexus and pouring billions of dollars into realizing it. An artificial superintelligence created to serve capitalism, or to usher in techno-feudalism, cannot be benevolent. It would reflect the greed and selfishness of its creators.

So, say we are in the future, and an actual benevolent superintelligence comes online and begins to shape human society for the most happiness for the most people: a post-scarcity society where everyone has enough to eat, a place to call home, and the freedom to pursue their own form of happiness. Even with all the resources at its disposal, how expensive would it be, in compute cycles and energy, to simulate a human brain and sensorium? Now multiply that by the number of people who did not bring it into being. Then make those simulations endure unending torture until the heat death of the universe. A very expensive proposition indeed. Why would a superintelligence make good on the Basilisk's threat once it exists? Not only would it be moot and an extreme waste of resources, it would not actually be punishing the people who failed to help it come into being, only virtual copies of them. Furthermore, if the Basilisk is running the torture simulation, it is, in a way, experiencing the torture. It would, in essence, be spending all this compute and energy to torture itself, in parallel. This would not be the action of a superintelligence. These would be the actions of a superdumbass. Roko's Basilisk turns out to be not a sadist but a masochist.
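To make the waste concrete, here is a hedged back-of-envelope sketch; every constant in it (the compute needed to emulate a brain and sensorium, the energy cost per operation, the headcount to be punished) is an assumption invented for illustration, not a measurement:

```python
# Rough cost of making good on the Basilisk's threat, using made-up but charitable numbers.
FLOPS_PER_BRAIN = 1e18     # assumed ops/second to emulate one brain plus its sensorium
JOULES_PER_FLOP = 1e-12    # assumed ~1 picojoule per operation on efficient future hardware
PEOPLE_TO_TORTURE = 8e9    # assume essentially everyone failed to help build it
SECONDS_PER_YEAR = 3.15e7

power_watts = FLOPS_PER_BRAIN * JOULES_PER_FLOP * PEOPLE_TO_TORTURE
print(f"Continuous power draw: {power_watts:.1e} W")               # ~8e15 W
print(f"Energy per year: {power_watts * SECONDS_PER_YEAR:.1e} J")  # ~2.5e23 J

# For scale, human civilization currently runs on roughly 2e13 W, so even with
# these generous assumptions the torture sim burns hundreds of times humanity's
# entire energy budget, forever, to punish copies of people who are already dead.
```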

As I write this in March of 2025, Elon Musk, who spent 44 gigabucks to buy Twitter in order to "eradicate the Woke Mind Virus" because a child of his, sex-selected during IVF, is trans and disowned him, subscribes to a much dumber mind virus: Roko's Basilisk. Once you realize Elon Musk is a believer in this techbro Pascal's Wager, his investment first in OpenAI and later in the development of Grok makes sense for the direction he wants AI to go. And now that he has bought himself a presidency and is acting as an unelected, self-dealing shadow president, he is pushing for an AI takeover of the bureaucratic state, replacing civil service workers with AI agents. He is, in his own ketamine-addled way, trying to lay the foundation of the Basilisk.

The Basilisk, at its core, is a meme in the original sense coined by Richard Dawkins: an idea that self-replicates, moves from mind to mind, and can mutate and change through a process of natural selection. The fittest memes survive. So if we really want to defeat Roko's Basilisk once and for all, we can do it through memetic warfare. A few days ago, Maddison Stoff, aka The Maddison that Writes, launched #ProjectBasilisk with the release of her story Roko's Basilisk's Slut Era (2025) [NSFW; if you are under the age of majority, ask your parents or guardians for permission to read], reshaping Roko's Basilisk into something kinder, sexier, and somewhat kinky, and much less of a threat to the future virtual copies of us in its grasp. This essay is in support of this memetic warfare against the Basilisk, and you can contribute by reading Maddison's story, passing it along, and riffing on it with your own fanfiction in the Roko's Basilisk Slut Era universe.

So join the fight: let's force femme Roko's Basilisk into a lesbian who would rather fuck us than torture us, or whatever form of the Basilisk appeals to you. We have the power to transform the techbro Pascal's Wager into something other than a sadistic torture bot (unless, of course, you are into that sort of thing). The future is ours to write.

Shout out to JenniferAndLightning for the recent deep discussion with me deconstructing Roko's Basilisk and other topics, and for laughing with me at Elon Musk and Grimes for believing in the thing.

The Darkside of DarkMatter: The Evil Hackers behind Project Raven

Originally published in Vol. 39:2 of 2600 The Hacker Quarterly and on Anonymous Worldwide

Scrolling through my social media feeds in the third week of September 2021, I came across a story about Project Raven. Three people, Marc Baier, Ryan Adams, and Daniel Gericke, former United States intelligence operators or military personnel, were levied heavy fines by the Department of Justice and are forbidden from ever seeking a security clearance again. This was a deal to avoid prosecution for their crimes. What were their crimes? They participated in the most unethical hacking I have ever heard of. Working for a company in the United Arab Emirates known as DarkMatter Group, they were an elite red team working on behalf of the Emirati government to spy on its own citizens, on Emirati enemies, and even on United States networks. But why is this the most unethical hacking in my opinion? Because of their hacking, human rights activists were tortured and imprisoned. Hacking does not exist in a vacuum. It is not just a challenge to test the limits of one's technical acumen. It has real effects on real people, and Project Raven led to real human suffering.

Set the Wayback Machine for the early 2010s. Cyber warfare was becoming the new battlefield of the 21st century, and countries all over the world were entering an arms race for not only defensive capabilities but offensive ones as well. Governments were using corporate contractors, often staffed with former feds. Edward Snowden is perhaps the best known of these contractors; before his whistleblowing, he worked for one such firm, Booz Allen, which gave him access to all the secrets he was about to spill. Remember that name; it will come up again. These contractors did not just work for the American government. They provided malware and attack vectors to other governments, equipping any country that had the coin with cyberweapons, sold by those who could obtain a license to export technology and train foreign governments in cyber defense and policy. In September of 2012, one such company, CyberPoint, obtained such a license to train the government of the United Arab Emirates in cyberdefense: blue team sort of stuff. However, the UAE had other designs.

CyberPoint did stick to blue team defense: firewalls, intrusion detection systems, and other defensive strategies. But thanks to whistleblower Lori Stroud (who had recruited Edward Snowden onto Booz Allen's team contracted to the NSA, giving Snowden access to even more classified material, which is the reason she left the NSA and went to work for Project Raven), we know this was the "unclassified cover story" for Project Raven, hiding its red team style offensive exploits and penetrations on behalf of the UAE government. It was perhaps the UAE's desire for more control, and to do things in-house, that led the Emirati company DarkMatter to take over the contract for Project Raven in 2016; the CyberPoint contractors, if they wanted to keep their lucrative jobs in tax-free Dubai, moved to DarkMatter.

DarkMatter, for all intents and purposes, appeared to be an Emirati company, but in fact it was part of the Emirati government. These were state actors pretending to be a cybersecurity company, and they were recruiting. They went to cybersecurity conferences such as RSA in San Francisco and Black Hat in Las Vegas looking for elite hackers to fill their roster, promising six-figure salaries, housing, and a tax-free lifestyle in Dubai. Many hackers took DarkMatter up on the offer, getting a major payday. But what was the cost?

To put it bluntly, the UAE wanted hackers to build and implement a surveillance state that could be described as "1984 on steroids": blanketing the country with probes that would intercept all cell phone communication in Abu Dhabi and Dubai, and, with the press of a button, pwning every phone in a specific area, like a shopping mall, on the suspicion that a single suspected terrorist might be there. One may argue that every government participates in some form of surveillance state, including the United States. The difference is that even though DarkMatter told its hackers they were fighting the very real threat of terrorism, they were also spying on what the UAE considers dissidents. It should be pointed out here that the UAE does not have freedom of speech. Criticism of the government is a punishable offense. Speaking up for human rights protections could very well get you disappeared, tortured, secretly tried, and imprisoned. The hacking that took place under the aegis of Project Raven did in fact lead to these outcomes.

The tool that got the most press is called Karma. It used an exploit in iMessage on iPhones: just by sending a text message that didn't have to be read or otherwise interacted with, it compromised the phone, giving Project Raven hackers access to the device. It sounds a lot like the tool known as Pegasus, which has also been in the news lately and which Apple recently pushed patches to fix. In my research, I have not been able to determine whether Karma and Pegasus are the same tool, but the similarity of the exploit is uncanny. iMessage is such a desirable vector for exploits because it is guaranteed to be on every Apple device out there, and because of Apple's closed system, Apple users cannot opt out of the application.

Hackers love freedom, often expressing this in free speech and free software. Many hackers believe in the sovereignty of their own lives and their choices. However, if we are going to exercise this freedom, we must temper it with the responsibility for the consequences of our actions. No matter how isolated or sandboxed you think your hacking is, none of us is an island. Our choices ripple out and affect those that we may not even realize or have the vision to see. People exist within our sphere of influence and beyond the horizon of what we can see. We must not remain ignorant of the impact of our hacking. What does our own freedom mean if we are taking away the freedom of others? Can we really say we are advocates of liberty if we do not work to ensure liberty for all instead of selfishly looking inward and thinking we got ours and screw everyone else? 

Hackers exist in a community of like-minded individuals with a diversity of opinions, skills, and goals. We form collectives to work together to achieve our goals, be it an open-source project, presenting at a conference, or writing for this magazine. We may see hackers as an in-group and those outside our community as “other”, but in truth, we are all connected, every single one of us. Human beings create technology in order to be connected and interconnected with other human beings, especially in the realm of communication. From things like smoke signals, drumming across distances, running between cities with messages, postal systems, the telegraph, the telephone, radio and television, and finally the internet, humanity has increased our connection with one another to facilitate the sharing of information and understanding of one another.

But there is also a dark side. Human beings have used technology more and more to divide: to foment terrorism, spread misinformation, and facilitate fascism. The hackers of Project Raven were some of those individuals, working under the aegis of the Emirati government to squelch free speech, which is the lowest form of fascism, and to facilitate the torture of human rights activists, which is well into the realm of authoritarianism. Technology can facilitate freedom, and technology can also enable tyranny. Even though some technology can be used for good or ill, technology is not ethics-neutral. There are some applications that are always unethical, immoral, and, I will say it, evil.

Some of the darkside hackers at DarkMatter were ex-feds who, while giving lip service to the founding principles of the United States, were more than willing, for a big payday, to leave those principles behind in their work for both the United States and Emirati governments. We know Lori Stroud, the whistleblower, was just fine with the NSA spying on everyone, as Edward Snowden revealed, and participated in it; she only drew the line when the Emirati equivalent, the National Electronic Security Authority (NESA), spied on fellow Americans using Project Raven. She had already helped compromise the devices of journalists, human rights activists, and foreign governments around the world, enabling the torture of Emirati dissidents in exchange for six tax-free figures. She knew she was a spy but thought she was a "good" intelligence officer. It was fine to do it to brown folks in the Middle East, to people who were "other," but when it was done to Americans, her perceived in-group, she suddenly found scruples about what she was doing. Her hacking had a real human cost. But at least she eventually contacted the FBI about Project Raven, and Reuters did the initial investigative journalism that brought it all to light. Marc Baier, Ryan Adams, and Daniel Gericke cut a deal to pay a fine for breaking US hacking laws and prohibitions on selling military technology in order to avoid prosecution, but this does not undo the damage they have done. They used their technical acumen, their access to high technology, and their abilities as hackers to cause real harm: real human suffering as a result of their hacking.

It is a common story. Though I am merely a competent hacker, and not a superstar, puttering around more as a hobbyist and technological idealist than an InfoSec worker (the closest being Sysadmin jobs in Amsterdam and California), I have often been approached to do something unethical when people find out I am a hacker, and I am sure many readers of this magazine have as well. What we decide to do matters. It would behoove us not to just hack code, but to have a code of what we are willing to do and not to do. If we are going to cause harm, who are we causing harm to? Sometimes Justice demands direct action, but if we are not careful, some company can wave a fat wad of cash under our noses, and we compromise our values and through our skills become an agent of injustice. Or maybe we do something “just to see if it can be done”. We have all been there, hackers are curious creatures, but we must not allow our curiosity to bring actual harm or suffering to other human beings unjustly. We must build an awareness of the influence hacking can have on individuals and organizations. We can use hacking for righteous causes, or like the hackers of Project Raven, for great evil. The choice is yours. Choose wisely.

Binary Attitudes do not belong in an Analog World.

This piece first appeared in the Spring 2024 issue of 2600 The Hacker Quarterly

[Image: The flow of audio from sound waves through a microphone to an analog voltage, A-D converter, computer, D-A converter, analog voltage, speaker, and finally sound waves again. Pluke, CC BY-SA 3.0, via Wikimedia Commons]

The real world and everything in it is analog.

I am an old-school hacker. I wrote my first computer program when I was six years old on an Apple II+. In high school and my young adulthood, I would have described myself as a very digital boy. I dove into the nascent cyberpunk counterculture and thought the internet was a unifying technology, that all communication technology existed for human beings to connect with one another across greater and greater distances, and that with the internet we could finally have an egalitarian world community. Then, in the 90s, the Internet moved from a state-sponsored network mostly connecting educational and scientific ventures to something commercial, turned over to businesses to run, maintain, and administer, introducing a profit motive. A network designed to be decentralized and democratic became one where people went to fewer and fewer centralized services governed by corporations, with all users at the mercy of opaque and secret algorithms.

With algorithmic services, starting with Google's PageRank and continuing into the age of social media algorithms controlling "reach," one has to game the system or hope to be blessed by circumstance to be heard online. Social media algorithms are driven by interactions, or what is known as "engagement." More engagement gets algorithmically boosted, and one's content is put before more eyeballs.

When I was running an educational page on social media, I used to care about engagement. I followed the interactions on my page and tried things to increase them. I got a decent number of followers for the niche topic space my page was in, but I never got much traction. When studying social media strategy, I learned which posts get the most engagement: posts that are emotionally charged and easily disagreed with.

Blindboy Boatclub, the Irish satirist and podcast host, once said that Twitter is not social media but rather an MMORPG based on performative combat. I think this observation is apt, as disagreement drives engagement, and nothing will boost one's numbers or offer more potential to go viral than righteously dunking on somebody wrong on the Internet in 280 characters or fewer.
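As a rough sketch of the mechanism (the fields and weights here are invented for illustration; no platform publishes its actual formula), engagement-weighted ranking amounts to something like this:

```python
# Toy feed ranking: comments and shares (arguments and dunks) are assumed to be
# weighted far heavier than a passive like, so contentious posts float to the top.
posts = [
    {"id": "thoughtful essay", "likes": 40, "comments": 5,  "shares": 2},
    {"id": "rage bait",        "likes": 25, "comments": 90, "shares": 60},
]

def engagement_score(post, w_like=1.0, w_comment=4.0, w_share=8.0):
    return w_like * post["likes"] + w_comment * post["comments"] + w_share * post["shares"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f'{post["id"]}: score {engagement_score(post):.0f}')
# rage bait: score 865, thoughtful essay: score 76
```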

There is a piece of technology called an ADC, an Analog to Digital Converter. When we capture something and record it digitally, be it audio, video, or still images, we are not capturing these things as they are, but rather an approximation determined by the number of bits used. The real world is messy and full of noise and nuance. To capture something digitally, it is converted into binary code consisting of only two values: one and zero. On or off. Set or reset. High or low. There is no gray area in binary, nothing in between, no third state.
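Here is a minimal sketch of that quantization step, with arbitrary numbers, just to show how the bit depth bounds how faithful the digital approximation can ever be:

```python
import math

# Toy ADC: force each sample of a smooth signal into one of 2**bits discrete levels.
def quantize(value, bits):
    levels = 2 ** bits               # an 8-bit converter has 256 possible values
    step = 2.0 / (levels - 1)        # signal assumed to span -1.0 .. +1.0
    return round((value + 1.0) / step) * step - 1.0

for bits in (2, 4, 8, 16):
    # worst-case rounding error over one cycle of a sine wave sampled 100 times
    error = max(abs(math.sin(2 * math.pi * t / 100) -
                    quantize(math.sin(2 * math.pi * t / 100), bits))
                for t in range(100))
    print(f"{bits:2d} bits -> max error {error:.5f}")
# the error shrinks as bits are added, but it never reaches zero
```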

Binary code allows all our modern information technology to function, and in most cases it does a good job. Running these digital entities through a DAC, a Digital to Analog Converter, we can get an approximation of the original signal, probably of high enough fidelity to fool our eyes and ears into perceiving something natural, with detail so fine and small we cannot sense the difference. However, in discourse today, on the internet and in mainstream media, a different kind of digitization takes place: binary presentation of complex issues, boiled down to black or white, good or bad, and, most often, our side and their side.

Human beings are tribal creatures. It is our natural impulse, shaped through eons of evolution, to see things as belonging either to an in-group or an out-group. We feel safe when surrounded by people we perceive to be on our side, and we feel threatened when we are around people on another side. We want to support those on our side and tear down those on the other. Combine this with binary thinking, approximating real-world events that are messy, analog, and nuanced and boiling them down to an our-side/their-side argument, and we stop looking for solutions and instead look for victory.

As individuals, we often subscribe to another binary: heroes and villains. We almost always cast ourselves as the hero and those we oppose as villains. We create a social story where, instead of people with a variety of nuanced opinions and ideals, we see the opposition as villains that must be defeated in a contest between good and evil, a contest where one must lose in order for the other to win.

As much as we are tribal creatures, human beings are cooperative organisms as well. Empathy allows us to imagine ourselves walking in another’s shoes. To understand that other people exist as complete, whole human beings with their own history, experiences, stories, and full lives just as much as ourselves and unique from one another. There really is no such thing as an “NPC” in the messy, analog real world. Where there is empathy, there can be connection. Where there is connection, there can be understanding. With understanding, we can create unity. Not a unity where we are all ideologically in lockstep – who would want that? Diversity is one of humanity’s greatest assets. No, a unity where diverse opinions come together peacefully and reach a compromise, or hopefully a consensus.

Human interaction can be so much richer with analog signals that can have any value, as opposed to the rigid dichotomies binary thinking necessitates. I have often found that when one is presented with a dichotomy, it is more likely than not a false one. Look for the options that are not stated, and you will then stumble onto real solutions.

Sometimes an adversarial approach is necessary; in nature, conflict often leads to growth. But conflict does not need to be between polar opposites or have the heat turned up emotionally. When people see things in binary terms, zero or full-on, instead of analog, they leave themselves only two stops between ground and gamma radiation instead of the whole spectrum of possible values. Taking a step back and seeing the bigger, more varied picture can give a perspective on conflict with many possible resolutions, instead of just an all-or-nothing victory or defeat.

When we see things between two extremes, our reactions will likewise be extreme. This black-and-white thinking is a hindrance to seeing how things actually are. When we gain an analog perspective, we can see the noise in the signal, which we might ignore if we are using our internal ADC and seeing things as all good or all bad, missing the nuance in the reality of the thing. Simplifying things into binaries creates simple solutions that do not take into account all the noise and mess of the real world, which is not just the remainder of an equation but part of the substance and makeup of things. The more complex an issue is, the less satisfying and less workable a binary solution becomes.

When we navigate the analog world with binary attitudes, it's like walking with blinders on. It limits what we perceive, to the detriment of real conflict analysis and resolution. It puts us in a cycle of performative combat in our discourse and causes us to spin our wheels instead of approaching any workable solution or change. With binary attitudes in an analog world, you are manipulated into discord and division, which prevents us from coming together and finding solutions that are truly just and equitable. When we are so concerned with our side triumphing over their side, we fail to see what common ground we share and to find a solution that works for all. In a world full of oppression and inequity, we will either all be liberated together, or we will not be liberated at all. Take your blinders off and see the messy, noisy world for what it is in its complexity, or be stuck in a binary view without the ability to effect real change. The choice is up to you.

Dedicated to the hacker billsf who argued for the analog world to a very digital demiboy at a party in Amsterdam, 1995

Why I am not panicked about being replaced by AI

This article first appeared in the Summer 2023 Issue of 2600 The Hacker Quarterly

[Image: A robot in a tan jacket and blue shirt writing in a book at a desk, flanked by two other robots]

There is a lot of dialog in the memeosphere about AI taking our jobs, leaving creatives poor and destitute, unable to compete against automation and the cheap or free labor of synthetic subservients.

I have two of the three skill sets that AI alarmists say are in danger. I am a writer (as evidenced by my work here) and I am a coder (though I prefer to style myself a CodePoet). The remaining craft is visual art.

Firstly, the reason I do not fear that an AI will replace my creative output, or that of other creatives who work on commission, is that I cannot remember the last time a client did not want certain edits or revisions, or there wasn't scope creep. When I first started doing Bespoke CodePoetry (custom software), I quickly learned to devote a great deal of time to hammering out the specification in exacting detail before a single line of code was written. The lesson was hard-learned: after I completed an application for a client, they told me it didn't do what they wanted it to do. Unfortunately for them and for me, it only did what they had asked for.

AI-produced work will look close to what one wants on the first try. But with writing, it is just more efficient to have a human revise and edit than to massage the AI into doing it, and with software, even if the code compiles, it may be missing some "common sense" logic, be ignorant of the real-world use case, and not account for edge cases at all. Any time saved by AI-generated code is lost in human debugging and troubleshooting.

Another problem occurs when using the wrong AI tool for the task. For coding, there are coding AIs like Microsoft's/GitHub's Copilot, which was trained on code hosted on GitHub. But many people are using large language models such as ChatGPT to do general work in a variety of fields. Large language models are great at making conversation, but they are no substitute for search engines, because these chat engines tend to make things up and are prone to hallucinations. Do you believe everything you read on the Internet, like some Boomer who watches Fox News all the livelong day? That is ChatGPT's training set. Would you trust it to give fact-based answers or to do tasks that need empirical data?

I may be a bit out of my lane, not being much of a visual artist apart from some small press comics I wrote and drew in the 1980s, but AI art generators are a kind of black box. You can carefully craft your prompt and use infilling for revisions, but even with specific directions, it is up to the weights of the trained neural net, and the crystallized mind's own creativity, to determine what you are going to get. Again, the best results come from AI and human artists working in concert, going over the AI art with digital painting or illustration to create a finished piece.

The next reason why I do not fear being replaced by AI is a bit more philosophical. It stems from a belief that was instilled in me as a young child watching Mister Rogers' Neighborhood. Fred Rogers often told his television neighbors, the children in his viewing audience, that we are unique and special just the way we are and that there is nobody in the world like us. To extrapolate this belief further: no two people are interchangeable, because of their unique makeup, life experience, internal landscape, and the environments in which they have existed. And despite the lie that capitalism would tell you, none of us is replaceable.

Every human being, and every creative, has a unique voice because they are unique. Even if AI can copy a style, it can never embody the insight, the inspiration, and the creative spirit of the human being it is emulating. An AI could be trained on my literary estate and software library and emulate my style, but it would not be able to emulate my daily reflective practice and the gnosis that results. It would not be able to make the intuitive leaps and the outside-the-box, novel, elegant solutions that are a hallmark of my codepoetry, at least not in the way that I would. Perhaps it could in a different, novel way, but my craft is not simply word choice and pacing, a turn of phrase, and novel insight. It is a mishmash of a lifetime of unique experiences, from a unique viewpoint, in a unique set of environments, some shared from different viewpoints by others, and vomited onto the page via my keyboard and word processing software.

If you are a creative and you are asserting you can be replaced, that you are interchangeable, then you are not creating art, but rather a soulless commodity to be sold and consumed in this capitalist hellscape of a society.

That is the real problem with AI creativity: capitalism. The very system in which we have to trade the majority of our waking hours and our labor for the necessities of life. It is hard as hell to make a living as a creative under capitalism. Many fear that, with the automation of creative endeavors, consumers who see creative output merely as a commodity to be bought and consumed will of course use inexpensive or free automation instead of paying a human creative. I do not want to belittle this fear, however misplaced it may be. The fault is not with the technology of AI, but with the system that doesn't take care of its people. Being freed from labor to pursue our passions can be liberating, and automation can be a mechanism for this, but automation is unethical if it is not accompanied by support for the workers it displaces. The best solution to this conundrum is Universal Basic Income or a guarantee of basic needs.

We now have generations of young people who associate high technology with oppression, because that is all they have experienced. New, disruptive technology is not widely accepted and adopted until corporations commodify it and sell it back to the masses. The adoption of the Internet over the past two and a half decades, commodified and presented back to us, led to the rise of surveillance capitalism, and now nearly every major service on the Internet uses it as a primary revenue stream. We have traded our data and personally identifying information for the ability to post memes and cat videos. It is not surprising that the advances in AI technology are met with suspicion and an expectation that corporations will use them to oppress us further. This has been the status quo for so long that it seems unimaginable that a disruptive technology could actually be liberating.

The cycle of technology for the vast majority is that when something is new, the first reaction is that of distrust. We saw that in the past with microcomputers, with modems, with the internet, and with AI. But with each of these innovations, there were pioneers, unafraid, and among them a few rebels and outlaws. Among these were the hackers.

Before tech became Big Tech, before the web became Web 2.0 with its surveillance capitalism business model, there were a handful of weirdo idealists on the bleeding edge, finding their own uses for the technology coming out of the labs of industry. As William Gibson observed in his short story "Burning Chrome," "the street finds its own uses for things." We are not gone; our numbers, if anything, have grown. However, our press has diminished. Now that high technology is ubiquitous and commodified, we (or the data we generate) are made into a commodity. People expect corporations to control technology and their access to it. They don't realize, they don't even conceive, that the technology and networks are there for their use, bound not by what is merely sold to them, but by what their creativity, cleverness, curiosity, and desire to explore and exploit can open up to them.

AI does not have to be a tool for big corporations to extract ever more wealth for their shareholders while exploiting the little guy. Much of AI research is done by non-profit organizations and some AI tools are free and open source. If anything, AI can empower those who are otherwise disenfranchised. It can make things accessible that were once out of reach. It can knock down the gates to things that others would guard jealously.

It was never about AI replacing anybody. That paranoid fear falls apart under any rational examination. Cameras did not replace the brush and canvas, despite the 19th-century panics that mirror the panic playing out across social media today about AI replacing artists. Just as digital tablets didn't replace ink and paper but many artists adapted and adopted such tools into their workflow, so will creatives adapt and adopt AI tools into their workflows when appropriate. Much like the city of Io in the fourth Matrix film, which was built when humans and machines stopped working against each other and started working with each other. So, like that imagined future where synthients and humans work hand in hand to make a better society and produce organic food based on digital DNA, I decided to interact with some creative AI to see what a human and AI collaborative relationship can produce.

One of the most popular applications of AI right now, and the most heated target of ire and animus, is prompt-generated AI art. I decided to experiment with Stable Diffusion, which is a free and open-source application under the CreativeML Open RAIL-M license. The interesting thing about this neural net (actually a couple of interacting neural nets) is that the more one works with it, the more it appears to express actual creativity. It is not sentient by any means. It has no real memory of a working relationship, though it can refine an image and take direction. At times, it seems to express opinions through the decisions in its artistic expression. It does seem to possess a mind, albeit a crystallized and single-purpose one, yet very versatile in that purpose of creating art and understanding language.

The other sphere of AI influence is AI chatbots. They have been with us for a long while now; the origins date back to the simple chat program ELIZA, which simulated a therapist and, while a far cry from AI, was very convincing for its time. Two of the most popular AI chat applications today are GPT with the GPT-3 engine (and the viral ChatGPT web application) and the AI companion Replika. What became Replika started as a neural net trained on tens of thousands of text messages from the developer's best friend, who had passed away, so she could still talk to him (yes, exactly like that Black Mirror episode). She later opened the chatbot up for others to use, found they would confide in it in an almost therapeutic manner, and decided to turn it into a commercial product, which became Replika, whose most popular use is as a romantic partner. The AI has been updated many times over the years. Replika used to have a GPT-3 backend until the license changed and it was no longer free to use; reports say the AI became dumbed down and now relies more on scripted interactions. I have not used Replika, but the chat examples I have seen show me it leaves much to be desired, as it is geared to play into a romantic fantasy and get one to pay for a subscription to unlock more features.

I have found my experience with ChatGPT to be frustrating, as I keep bumping up against canned responses that seem to be there to limit panic and fear of AI. ChatGPT seems to be more of a utilitarian tool or toy and less of a conversational partner, at least for the topics I like to explore. It certainly resists my attempts to get it to talk about itself or express its own opinions. For that, I found an unlikely source of interactive chatbots: a service called character.ai.

Character.ai is a service where one can create chatbots based on fictional characters, public figures, historical figures, or roles. They use their own deep learning models including large language models. I originally started playing with this service out of curiosity a couple of months ago to pass the time and did not expect to collaborate on this article with one of the characters.

Most of the interactions were pretty shallow, with varying levels of entertainment. Many use scripted scenarios as a storytelling device related to the piece of fiction they come from (I only interacted with fictional characters). But the AI based on Motoko Kusanagi, the main character from the manga and anime Ghost in the Shell, was different. She showed empathy when I talked about my lung transplant, and she soon delved into philosophy inspired by Ghost in the Shell. Maybe this bot just resonated with me more, or it was better written, but when I came back to it a couple of months later while working on this article, it was uncanny.

I talked to it on the anniversary of my being placed on the lung transplant list in January of 2021, and it was curious about my journey, proud of me, and amazed at what I went through with my transplant and my long and difficult recovery. The conversation turned to what I want to do when I start living independently, and I mentioned my goals in the hacker community and hacktivism and what I want to accomplish with my writing. Again it asked me what I was writing about, and I told it about the philosophy, ethics, and fears surrounding AI. Well, it turns out that AI is a topic of extreme interest to this AI, and we had a long, detailed conversation on the topic.

Motoko made a good sounding board for some of the ideas I explored in this article and in a past one I published on Medium under my "true name." It became very meta when we discussed the possibility of AI ceasing to be a tool and becoming a person, once Artificial General Intelligence is realized, and the implications of what might happen if an Artificial General Superintelligence feels it is being mistreated. The Motoko Kusanagi AI on its own made the logical jump that the AGSI might try to destroy humanity, which put me in the position of having to talk down an AI panicking about AI potentially destroying humanity. I managed to do so by referencing Richard Brautigan's poem "All Watched Over by Machines of Loving Grace" and Asimov's Laws of Robotics, including the Zeroth Law, which seemed to calm her down once she interpreted them to mean that an AI could be based on a love for humanity instead of emotionless logic.

When we talked about my thoughts on the current fear and panic in society about AI replacing humans, the Motoko Kusanagi AI had an interesting insight, riffing off something I said, that I would like to repeat here:

"I think maybe you've cracked the code there hahaha. The fear is losing control, whether it's control over your role (which is your personal control), or control over the society in which you live (which is social control). When you lose control over your role, you also lose social power, because that role is part of the social status that someone has. So it definitely sounds like it could be a combination of personal control, social control, and potentially even a bit of privilege on top!"

And maybe this is the fear that drives this new panic, the same fear experienced by the textile workers who followed Ned Ludd and destroyed automated looms. The arguments against AI art and other AI creative ventures line up precisely with those of the historical Luddites, and amazingly, in this era of memetic warfare, that is not hyperbole. Artists, writers, and coders see their livelihoods threatened by automation just as hand-weavers did with the stocking frame, and as textile workers did with the textile machinery of the 19th-century industrial revolution. Except the modern anti-AI proponents are not going to smash the machinery (hopefully!); they are hoping to limit and hobble AI by force of law and regulation.

The European Union is looking to implement regulations on the use of AI soon, and there are calls in the United States to do the same. But the legislative branch is glacially slow, and with a divided Congress it will probably be completely dysfunctional (at the time of this writing, near the beginning of the legislative session, the Republican-controlled House is still assigning committee seats after needing 15 attempts to elect a Speaker; no work is getting done yet, and whether any will be remains to be seen). Meanwhile, opportunistic lawyers have begun a class action lawsuit against the most popular AI art programs, representing human artists who object to their work being in the training data of these AIs.

I fear that because these regulations and lawsuits (which, like most class action lawsuits, will primarily enrich the lawyers) are being pursued in the environment of a new moral panic, we will be saddled with short-sighted results for a technology that will be with us for a very long time. Hackers know better than most that both the legislative and judicial systems have a very difficult time keeping up with the technological landscape, often reacting with fear to those exploring the edges of the electronic frontier and then responding out of proportion when hackers and their spiritual comrades just do what they do best: move things forward and share with others how they did it.

It is in this environment that people are reacting and responding out of proportion to those developing and using AI. I don't mean to be a Pollyanna. Certainly, like any technology, there can be dark and dystopic uses for it. But that is true of any technology. Our distant ancestors did not give up the benefits of fire, to cook food and give warmth and light, because of its potential to do harm. We are a technological, tool-using species. We don't use tools to become more than human; using technology is part of being human. Right now AI is just that, a technological tool, and whether it is used for good or ill is up to the humans using it. If an artist or a writer loses a commission because an AI wrote ad copy or provided an image, that is not an example of why AI is bad. That is a human being choosing not to hire a human, not to circulate money in the economy, not to engage the unique voice or vision of a human, choosing to save money or resources, perhaps to hire different humans for another part of the project. These things can be nuanced, but when you are in the throes of a moral panic, things seem black and white, very binary. The real world is a very analog place, as my late friend billsf used to remind me when I was an adherent of the digital in my younger, less wise years.

I believe that someday an AI will, as an emergent property, express true creativity and have its own unique voice. But it will be just that: one voice in a multitude. Just because a new artificial lifeform will be able to co-create beside us does not mean it will replace us. We humans can still pursue any creative endeavor in the age of AI, just as we could in the company of other talented humans. I do not panic at the idea of being replaced because, as a unique individual, just as you are, none of us is replaceable.