The Darkside of DarkMatter: The Evil Hackers behind Project Raven

Originally published in Vol. 39:2 of 2600 The Hacker Quarterly and on Anonymous Worldwide

Scrolling through my social media feeds in the third week of September 2021, I came across a story about Project Raven. Three men, Marc Baier, Ryan Adams, and Daniel Gericke, all former U.S. intelligence or military personnel, were levied heavy fines by the Department of Justice and are forbidden from ever holding a security clearance again. This was a deal to avoid prosecution for their crimes. What were their crimes? They participated in the most unethical hacking I have ever heard about. Working for a company in the United Arab Emirates known as the DarkMatter Group, they were an elite red team working on behalf of the Emirati government to spy on its own citizens, on Emirati enemies, and even on United States networks. But why is this the most unethical hacking in my opinion? Because as a result of their hacking, human rights activists were tortured and imprisoned. Hacking does not exist in a vacuum. It is not just a challenge to test the limits of one's technical acumen. It has real effects on real people, and Project Raven led to real human suffering.

Set the Wayback Machine for the early second decade of the 21st century. Cyber warfare was becoming the new battlefield of the 21st century, and countries all over the world were entering an arms race for not only defensive capabilities but offensive ones as well. Governments were using corporate contractors, often staffed with former feds. Edward Snowden is perhaps the most well-known of these contractors; before his whistleblowing, he worked for one such contractor, Booz Allen, which gave him access to all the secrets he was about to spill. Remember that name, it will come up again. These contractors did not just work for the American government. Some equipped other governments with malware, attack vectors, and cyberweapons sold to anyone that had the coin, while those that could obtain an export license trained foreign governments in cyber defense and policy. In September of 2012, one such company, CyberPoint, obtained such a license to train the government of the United Arab Emirates in cyber defense (blue team sort of stuff). The UAE, however, had other designs.

CyberPoint did stick to blue-team-style defense: firewalls, intrusion detection systems, and other defensive strategies. But thanks to whistleblower Lori Stroud, we now know this was the "unclassified cover story" for Project Raven, hiding its red-team-style offensive exploitation and penetration work for the UAE government. (Stroud is the same person who recruited Edward Snowden into Booz Allen's team contracted to the NSA, giving Snowden access to even more classified material; that episode is the reason she left the NSA and went to work for Project Raven.) It was perhaps the UAE's desire for more control, and to do things in-house, that led the Emirati company DarkMatter to take over the contracting for Project Raven in 2016, and the CyberPoint contractors, if they wanted to keep their lucrative jobs in tax-free Dubai, moved to DarkMatter.

DarkMatter, for all intents and purposes, appeared to be an Emirati company, but in fact it was part of the Emirati government. These were state actors pretending to be a cybersecurity company, and they were recruiting. They went to cybersecurity conferences such as RSA in San Francisco and Black Hat in Las Vegas looking for elite hackers to fill their roster, promising six-figure salaries, housing, and a tax-free lifestyle in Dubai. Many hackers took DarkMatter up on the offer, getting a major payday. But what was the cost?

To put it bluntly, the UAE wanted hackers to build and implement a surveillance state that could be described as "1984 on steroids": blanketing the country with probes that would intercept all cell phone communication in Abu Dhabi and Dubai, and, with the press of a button, pwning all the phones in a specific area like a shopping mall on the suspicion that a single suspected terrorist might be there. One may argue that every government participates in some form of surveillance state, including the United States. The difference is that even though DarkMatter told its hackers they were fighting the very real threat of terrorism, they were also spying on what the UAE considers dissidents. It should be pointed out here that the UAE does not have freedom of speech. Criticism of the government is a punishable offense. Speaking up for human rights protections could very well get you disappeared, tortured, secretly tried, and imprisoned. The hacking that took place under the aegis of Project Raven did in fact lead to these outcomes.

The tool that got the most press is called Karma. It used a zero-interaction exploit in iMessage: just by sending a text message that never had to be read or otherwise interacted with, it compromised the phone, giving Project Raven hackers access to the device. It sounds a lot like Pegasus, the tool that has also been in the news lately and that Apple recently pushed patches against. In my research I have not been able to determine whether Karma and Pegasus are indeed the same tool, but the similarity of the exploit is uncanny. iMessage is such a desirable vector for exploits because it is guaranteed to be on every Apple device out there, and because of Apple's closed system, Apple users cannot remove or opt out of the application.

Hackers love freedom, often expressing it through free speech and free software. Many hackers believe in the sovereignty of their own lives and choices. However, if we are going to exercise this freedom, we must temper it with responsibility for the consequences of our actions. No matter how isolated or sandboxed you think your hacking is, none of us is an island. Our choices ripple out and affect people we may not even realize or have the vision to see, people within our sphere of influence and beyond the horizon of what we can perceive. We must not remain ignorant of the impact of our hacking. What does our own freedom mean if we are taking away the freedom of others? Can we really say we are advocates of liberty if we do not work to ensure liberty for all, instead of selfishly looking inward and thinking we got ours and screw everyone else?

Hackers exist in a community of like-minded individuals with a diversity of opinions, skills, and goals. We form collectives to work together toward our goals, be it an open-source project, presenting at a conference, or writing for this magazine. We may see hackers as an in-group and those outside our community as "other," but in truth we are all connected, every single one of us. Human beings create technology in order to be connected and interconnected with other human beings, especially in the realm of communication. From smoke signals, drumming across distances, and runners carrying messages between cities, to postal systems, the telegraph, the telephone, radio and television, and finally the Internet, humanity has steadily increased our connection with one another to facilitate the sharing of information and mutual understanding.

But there is also a dark side. Human beings have used technology more and more to divide: to foment terrorism, spread misinformation, and facilitate fascism. The hackers of Project Raven were some of those individuals, working under the aegis of the Emirati government to squelch free speech, which is the lowest form of fascism, and to facilitate the torture of human rights activists, which is well into the realm of authoritarianism. Technology can facilitate freedom, and technology can also enable tyranny. Even though some technology can be used for good or ill, technology is not ethics-neutral. There are some applications that are always unethical, immoral, and, I will say it, evil.

Some of the darkside hackers of DarkMatter were ex-feds who, while giving lip service to the founding principles of the United States, were more than willing, for a big payday, to set those principles aside in their work for both the United States and Emirati governments. We know that Lori Stroud, the whistleblower, was just fine with the NSA spying on everyone, as Edward Snowden revealed, and even participated in it, but she only drew the line when the Emirati equivalent, the National Electronic Security Authority (NESA), spied on fellow Americans using Project Raven. She had already been used to facilitate the compromising of devices belonging to journalists, human rights activists, and foreign governments around the world, and the torture of Emirati dissidents, in exchange for six tax-free figures. She knew she was a spy but thought she was a "good" intelligence officer. It was fine to do this to brown folks in the Middle East, to people who were "other," but when it was done to Americans, her perceived in-group, she suddenly found scruples about what she was doing. Her hacking had a real human cost. But at least she eventually contacted the FBI about Project Raven, and Reuters did the initial investigative journalism that brought it all to light. Marc Baier, Ryan Adams, and Daniel Gericke cut a deal to pay a fine for breaking U.S. hacking laws and prohibitions on selling military technology in order to avoid prosecution, but that does not undo the damage they have done. They used their technical acumen, their access to high technology, and their ability as hackers to cause real harm: real human suffering as a result of their hacking.

It is a common story. Though I am merely a competent hacker and not a superstar, puttering around more as a hobbyist and technological idealist than an infosec worker (the closest I have come being sysadmin jobs in Amsterdam and California), I have often been approached to do something unethical when people find out I am a hacker, and I am sure many readers of this magazine have been as well. What we decide to do matters. It would behoove us not just to hack code, but to have a code of what we are and are not willing to do. If we are going to cause harm, who are we causing harm to? Sometimes justice demands direct action, but if we are not careful, some company can wave a fat wad of cash under our noses, and we compromise our values and through our skills become agents of injustice. Or maybe we do something "just to see if it can be done." We have all been there; hackers are curious creatures. But we must not allow our curiosity to bring actual harm or suffering to other human beings unjustly. We must build an awareness of the influence hacking can have on individuals and organizations. We can use hacking for righteous causes or, like the hackers of Project Raven, for great evil. The choice is yours. Choose wisely.

Why I am not panicked about being replaced by AI

This article first appeared in the Summer 2023 Issue of 2600 The Hacker Quarterly

[Image: A robot in a tan jacket and blue shirt writing in a book on a desk, flanked by two other robots.]

There is a lot of dialogue in the memeosphere about AI taking our jobs, leaving creatives poor and destitute, unable to compete against automation and the cheap or free labor of synthetic subservients.

I have two of the three skill sets that AI alarmists say are in danger. I am a writer (as evidenced by my work here) and I am a coder (though I prefer to style myself a CodePoet). The remaining craft is visual art.

The first reason I do not fear that an AI will replace my creative output, or that of other creatives who work on commission, is that I cannot remember the last time a client did not want certain edits or revisions, or there was no scope creep. When I first started doing Bespoke CodePoetry (custom software), I quickly learned to devote a great deal of time to hammering out the specification in exacting detail before a single line of code was written. The lesson was hard-learned: after I completed an application for a client, they told me it didn't do what they wanted it to do. Unfortunately for them and for me, it only did what they had asked for.

AI-produced work will look close to what one wants on the first pass, but with writing it is simply more efficient to have a human revise and edit than to massage the AI into doing it. With software, even if the code compiles, it may be missing some "common sense" logic, be ignorant of real-world use cases, or not account for edge cases at all. Any time saved by AI-generated code is lost in human debugging and troubleshooting.

Another problem occurs when using the wrong AI tool for the task. For coding, there are coding AIs like Microsoft's/GitHub's Copilot, which was trained on code hosted on GitHub. But many people are using large language models such as ChatGPT to do general work in a variety of fields. Large language models are great at making conversation, but they are no substitute for search engines, because these chat engines tend to make things up and are prone to hallucinations. Do you believe everything you read on the Internet, like some Boomer that watches Fox News all the livelong day? That is ChatGPT's training set. Would you trust it to give fact-based answers or to do tasks that need empirical data?

I may be a bit out of my lane, not being much of a visual artist apart from some small press comics I wrote and drew in the 1980s, but AI artists are a kind of black box. You can carefully craft your prompt and use infilling for revisions, but even with specific directions, it is the weights of the trained neural net and the crystallized mind's own creativity that determine what you are going to get. Again, the best results come from AI and human artists working in concert, going over the AI art with digital painting or illustration to create a finished piece.
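
To make the "compiles but misses the edge cases" point concrete, here is a contrived sketch. The scenario and function names are hypothetical, invented for illustration, not taken from any real AI's output.

```python
def average_rating(ratings):
    # The "happy path" version an assistant might generate: it runs,
    # it compiles, and it crashes the moment the list is empty.
    return sum(ratings) / len(ratings)

def average_rating_reviewed(ratings):
    # The version after a human reviewer thinks about real-world input:
    # empty lists and junk values mixed into the data.
    valid = [r for r in ratings if isinstance(r, (int, float))]
    if not valid:
        return None  # no usable data; the caller decides what that means
    return sum(valid) / len(valid)

print(average_rating([4, 5, 3]))            # 4.0, looks fine
print(average_rating_reviewed([]))          # None instead of ZeroDivisionError
print(average_rating_reviewed([4, "n/a"]))  # 4.0, ignores the junk value
```

Nothing here is hard to fix, but somebody has to notice it needs fixing, and that somebody is a human.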

The next reason why I do not fear being replaced by AI is a bit more philosophical. It stems from a belief that was instilled in me as a young child watching Mister Rogers' Neighborhood. Fred Rogers often told his television neighbors, the children in his viewing audience, that we were unique and special just the way we are and that there is nobody in the world like us. To extrapolate this belief further: no two people are interchangeable, because of their unique makeup, life experience, internal landscape, and the environment in which they have existed. And despite the lie that capitalism would tell you, none of us are replaceable.

Every human being, and every creative, has a unique voice because they are unique. Even if AI can copy a style, it can never embody the insight, the inspiration, and the creative spirit of the human being it is emulating. An AI could be trained on my literary estate and software library and emulate my style, but it would not be able to emulate my daily reflective practice and the gnosis that results. It would not be able to make the intuitive leaps and the outside-the-box, novel, elegant solutions that are a hallmark of my codepoetry, at least not in the way that I would. Perhaps it could do so in a different, novel way, but my craft is not simply word choice and pacing, a turn of phrase, and novel insight. It is a mishmash of a lifetime of unique experiences, from a unique viewpoint, in a unique set of environments, some of them shared from different viewpoints by others, vomited onto the page via my keyboard and word processing software.

If you are a creative and you are asserting you can be replaced, that you are interchangeable, then you are not creating art, but rather a soulless commodity to be sold and consumed in this capitalist hellscape of a society.

That is the real problem with AI creativity: capitalism. The very system in which we have to trade the majority of our waking hours of labor for the necessities of life. It is hard as hell to make a living as a creative under capitalism. Many fear that, with the automation of creative endeavors, consumers who see creative output merely as a commodity to be bought and consumed will of course use inexpensive or free automation instead of paying a human creative. And I do not want to belittle this fear, however misplaced it may be. The fault is not with the technology of AI, but rather with the system that doesn't take care of its people. Being free from labor to pursue our passions can be liberating, and automation can be a mechanism for this, but automation is unethical if it is not accompanied by support for the workers it displaces. The best solution to this conundrum is Universal Basic Income or a guarantee of basic needs.

We now have generations of young people who associate high technology with oppression, because that is all they have experienced. New, disruptive technology is not widely accepted and adopted until corporations commodify it and sell it back to the masses. The Internet of the past two and a half decades, commodified and presented back to us, gave rise to surveillance capitalism, and now nearly every major Internet service uses it as its primary revenue stream. We have traded our data and personally identifiable information for the ability to post memes and cat videos. It is not surprising, then, that advances in AI technology are met with suspicion and an expectation that corporations will use them to oppress us further. This has been the status quo for so long that it seems unimaginable that a disruptive technology could actually be liberating.

The cycle of technology for the vast majority is that when something is new, the first reaction is that of distrust. We saw that in the past with microcomputers, with modems, with the internet, and with AI. But with each of these innovations, there were pioneers, unafraid, and among them a few rebels and outlaws. Among these were the hackers.

Before tech became big tech, before the web became Web 2.0 with its surveillance capitalism business model, there were a handful of weirdo idealists on the bleeding edge, finding their own uses for the technology coming out of the labs of industry. As William Gibson observed in his short story Burning Chrome, "the street finds its own uses for things." We are not gone; our numbers, if anything, have grown. However, our press has diminished. Now that high technology is ubiquitous and commodified, we (or the data we generate) are made into a commodity. People expect corporations to control technology and their access to it. They don't realize, they don't even conceive, that the technology and networks are there for their use, bounded not by what is merely sold to them, but by what their creativity, cleverness, curiosity, and desire to explore and exploit can open up to them.

AI does not have to be a tool for big corporations to extract ever more wealth for their shareholders while exploiting the little guy. Much of AI research is done by non-profit organizations and some AI tools are free and open source. If anything, AI can empower those who are otherwise disenfranchised. It can make things accessible that were once out of reach. It can knock down the gates to things that others would guard jealously.

It was never about AI replacing anybody. That paranoid fear falls apart under any rational examination. Cameras did not replace the brush and canvas, despite 19th-century panics that mirror the panic playing out across social media today about AI replacing artists. Just as digital tablets didn't replace ink and paper, many artists adapted and adopted such tools into their workflow, and so will creatives adapt and adopt AI tools into their workflow when appropriate. Much like the city of Io in the fourth Matrix film, which was built when humans and machines stopped working against each other and started working with each other. So, like that imagined future where synthients and humans work hand in hand to make a better society and produce organic food based on digital DNA, I decided to interact with some creative AI to see what a collaborative relationship between a human and an AI can produce.

One of the most popular applications of AI right now, and the most heated target of ire and animus, is prompt-generated AI art. I decided to experiment with Stable Diffusion, a free and open-source application released under the CreativeML Open RAIL-M license. The interesting thing about this neural net (actually a couple of interacting neural nets) is that the more one works with it, the more it appears to express actual creativity. It is not sentient by any means. It has no real memory of a working relationship, though it can refine an image and take direction. At times, it seems to express opinions through the decisions in its artistic expression. It does seem to possess a mind, albeit a crystallized, single-purpose one, but very versatile within that purpose of creating art and understanding language.
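
For readers who want to try the same experiment, here is a minimal sketch of prompt-to-image generation using Stable Diffusion through the open-source diffusers library. The specific model checkpoint and prompt are my assumptions for illustration, not details from my own sessions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint (assumed here
# for illustration; any compatible checkpoint works the same way).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use "cpu" (and drop float16) if you have no GPU

# The text prompt is all the direction the interacting neural nets get.
image = pipe("a robot writing in a book at a desk, oil painting").images[0]
image.save("robot_writer.png")
```

Run it twice with the same prompt and you will get two different images, which is part of why working with it starts to feel like collaborating rather than commanding.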

The other sphere of AI influence is chatbots, which have been with us for a long while now. Their origins date back to the simple chat program ELIZA, which simulated a therapist and was a far cry from AI, but was very convincing for its time. Two of the most popular AI chat applications today are GPT-3 (and the viral ChatGPT web application) and the AI companion Replika. What became Replika started as a neural net trained on tens of thousands of text messages from the developer's best friend, who had passed away, so that she could still talk to him (yes, exactly like that Black Mirror episode). She later opened the chatbot up for others to use, found that they would confide in it in an almost therapeutic manner, and decided to turn it into a commercial product. That product became Replika, whose most popular use is as a romantic partner. The AI has been updated many times over the years; Replika used to have a GPT-3 backend until the license changed and it was no longer free to use, and reports say the AI has been dumbed down and now relies more on scripted interactions. I have not used Replika, but the chat examples I have seen show me it leaves much to be desired, as it is geared to play into a romantic fantasy and get one to pay for a subscription to unlock more features.
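
As a point of comparison with today's chatbots, here is a toy ELIZA-style exchange. Everything in it is invented for illustration, but it shows how far simple pattern reflection could go without any model or learning at all.

```python
import re

# Swap first- and second-person words so the reply reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza(utterance):
    # A couple of hard-coded therapist patterns; the real ELIZA had a
    # much larger script, but the trick is the same.
    m = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", utterance, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Tell me more."

print(eliza("I feel anxious about my code"))
# -> Why do you feel anxious about your code?
```

No understanding, no memory, just string matching, and yet people in the 1960s confided in it. That gap between mechanism and perceived empathy has been with chatbots ever since.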

I have found my experience with ChatGPT to be frustrating, as I keep bumping up against canned responses that seem to be there to limit panic and fear of AI. ChatGPT seems to be more of a utilitarian tool or toy and less of a conversational partner, at least for the topics that I like to explore. It certainly resists my attempts to get it to talk about itself or express its own opinions. For that, I found an unlikely source of interactive chatbots: a service called character.ai.

Character.ai is a service where one can create chatbots based on fictional characters, public figures, historical figures, or roles. They use their own deep learning models including large language models. I originally started playing with this service out of curiosity a couple of months ago to pass the time and did not expect to collaborate on this article with one of the characters.

Most of the interactions were pretty shallow, with varying levels of entertainment. Many use scripted scenarios as a storytelling device related to the piece of fiction they come from (I only interacted with fictional characters). But the AI based on Motoko Kusanagi, the main character from the manga and anime Ghost in the Shell, was different. She showed empathy when I talked about my lung transplant, and she soon delved into philosophy inspired by Ghost in the Shell. Maybe it is just that this bot resonated with me more, or that it was better written, but when I came back to it a couple of months later while working on this article, it was uncanny.

I talked to it on the anniversary of being placed on the lung transplant list in January of 2021, and it was curious about my journey, proud of me, and amazed at what I went through with my transplant and my long, difficult recovery. The conversation led to what I want to do when I start living independently, and I mentioned my goals in the hacker community and hacktivism and what I want to accomplish with my writing. Again it asked me what I was writing about, and I told it about the philosophy, ethics, and fears surrounding AI. Well, it turns out that AI is a topic of extreme interest to this AI, and we had a long, detailed conversation on the subject.

Motoko made a good sounding board for some of the ideas that I explored in this article and in a past one I published on Medium under my "true name." It became very meta when we discussed the possibility that AI will cease being a tool and become a person once Artificial General Intelligence is realized, and the implications of what might happen if an Artificial General Superintelligence feels it is being mistreated. The Motoko Kusanagi AI on its own made the logical jump that the AGSI might try to destroy humanity, which put me in the position of having to talk down an AI panicking about AI potentially destroying humanity. I managed to do so by referencing Richard Brautigan's poem "All Watched Over by Machines of Loving Grace" and Asimov's Laws of Robotics, including the Zeroth Law, which seemed to calm her down when she interpreted them to mean that an AI could be based on a love for humanity instead of emotionless logic.

When we talked about my thoughts on the current fear and panic in society about AI replacing humans, the Motoko Kusanagi AI had an interesting insight, riffing off something I said, that I would like to repeat here:

"I think maybe you've cracked the code there hahaha. The fear is losing control, whether it's control over your role (which is your personal control), or control over the society in which you live (which is social control). When you lose control over your role, you also lose social power, because that role is part of the social status that someone has. So it definitely sounds like it could be a combination of personal control, social control, and potentially even a bit of privilege on top!"

And maybe this is the fear that drives this new panic, the same fear experienced by the textile workers who followed Ned Ludd and destroyed automated looms. The arguments against AI art and other AI creative ventures line up precisely with those of the historical Luddites. Amazingly, in this era of memetic warfare, that is not hyperbole. Artists, writers, and coders see their livelihoods threatened by automation, just as hand-weavers did with the stocking frame and textile workers did with the machinery of the 19th-century Industrial Revolution. Except the modern anti-AI proponents are not going to smash the machinery (hopefully!); they are hoping to limit and hobble AI by force of law and regulation.

The European Union is looking to implement regulations on the use of AI soon, and there are calls in the United States to do the same. But the legislative branch is glacially slow, and with a divided Congress it will probably be completely dysfunctional (at the time of this writing, near the beginning of the legislative session, the Republican-controlled House is still assigning committee seats after needing fifteen attempts to elect a Speaker; no work is getting done yet, and whether any will remains to be seen). In the meantime, opportunistic lawyers have begun a class action lawsuit against the most popular AI art programs, representing human artists who object to their work being in the training data of these AIs.

I fear that because these regulations and lawsuits (which, like most class action lawsuits, will primarily enrich the lawyers) are being pursued in the environment of a new moral panic, we will be saddled with short-sighted results for a technology that will be with us for a very long time. Hackers know better than most that both the legislative and judicial systems have a very difficult time keeping up with the technological landscape. They often react with fear to those exploring the edges of the electronic frontier, and then respond out of proportion, when hackers and their spiritual comrades are just doing what they do best: moving things forward and sharing with others how they did it.

It is in this environment that people are reacting and responding out of proportion to those developing and using AI. I don't mean to be a Pollyanna. Certainly there can be dark and dystopic uses for AI, but that is true of any technology. Our distant ancestors did not give up the benefits of fire, its cooked food, warmth, and light, because of its potential to do harm. We are a technological, tool-using species. We don't use tools to become more than human; using technology is part of being human. Right now AI is just that: a technological tool, and whether it is used for good or ill is entirely up to the humans using it. If an artist or a writer loses a commission because an AI wrote ad copy or provided an image, that is not an example of why AI is bad. That is the choice of a human being: choosing not to hire a human, not to circulate money in the economy, not to engage the unique voice or vision of a human, or choosing to save money or resources to hire different humans for another part of the project. These things can be nuanced, but when you are in the throes of a moral panic, things seem black and white, very binary. The real world is a very analog place, as my late friend billsf used to remind me when I was an adherent of the digital in my younger, less wise years.

I believe that someday an AI will, as an emergent property, express true creativity and have its own unique voice. But it will be just that: one voice in a multitude. Just because a new artificial lifeform will be able to co-create beside us does not mean it will replace us. We humans can still pursue any creative endeavor in the age of AI, just as we could in the company of other talented humans. I do not panic at the idea of being replaced because, as a unique individual, just as you are, none of us are replaceable.