William Hertling is the author of Avogadro Corp, A.I. Apocalypse, The Last Firewall, The Turing Exception, and the upcoming Kill Process. These near-term science-fiction novels explore the emergence of artificial intelligence (AI), the coexistence of humans and smart machines, and the impact of social reputation, technological unemployment, and other near-future issues. His novels have been called “frighteningly plausible,” “tremendous,” and “must-read.”
Hertling’s Singularity Series novels have been endorsed by and received wide attention from tech luminaries including Harper Reed (CTO for the Obama Campaign), Ben Huh (CEO Cheezburger), and Chris Anderson (CEO 3DRobotics, former Editor-in-Chief Wired).
His first novel for children, The Case of the Wilted Broccoli, was published in 2014.
Hertling grew up a digital native in the early days of bulletin board systems. His first experiences with net culture occurred when he wired seven phone lines into the back of his Apple IIe and hosted an online chat system.
A frequent speaker on the future of technology, science fiction, and indie publishing, Hertling has spoken at SXSW Interactive, Defrag, OryCon, University of Colorado, Willamette Writers Conference, and many other conferences.
DC: Will, thanks so very much for doing this interview. Did you start off wanting to become a writer, or did you stumble into it?
WH: I very much stumbled into it, although, in retrospect, there were a few hints ahead of time. In college I helped write and publish a set of computer manuals. I started blogging in 2003, and also wrote a magazine article that year. In 2007 or 2008, I learned about NaNoWriMo, and started a non-fiction book about the business use of social media. I abandoned that project about 35,000 words in, when I realized just how difficult non-fiction writing is.
Then in 2009 or so, I read two books back to back, Accelerando by Charles Stross and The Singularity Is Near by Ray Kurzweil, that set my mind abuzz with thoughts of the technological singularity, the point where AI exceeds human intelligence. I noticed a gap in science fiction novels: some assumed strong AI existed, and others ignored the singularity entirely, but very few deeply explored the point of emergence and its impact on humanity.
I had the idea for Avogadro Corp over lunch one day, and daydreamed about it for six months. I took the month of December off work and wrote the entire first draft.
DC: Your four-novel Singularity Series, which began in 2011 with Avogadro Corp and concluded last year with The Turing Exception, is a deep dive into and a wholly fresh perspective on the so-called technological singularity. The books in this series have sold 75,000 copies and racked up over 1,300 reader reviews with a 4.5-star average, putting you in the front ranks of success for a self-published indie author. How did you crack the tough nut of marketing and achieving visibility in a crowded marketplace?
WH: I reached out to friends and family, letting them know by any means possible that I’d published: email, Facebook, and Twitter. For these people, it was not so much selling them on the strength of the book, but conveying the excitement represented by this milestone in my life. Many people want to be supportive, but don’t know what an author needs, so I asked specifically for people to buy the book, tell friends, and post reviews.
Learning from those early experiences, I refined my website, book description, and how I asked for help. Then I reached out to more distant connections and potential influencers (other bloggers, for example). I created business cards, and handed these out at conferences. At this point I was selling 1-3 copies a day, maybe 150 books in total.
One of the most important elements of my marketing was using very finely tuned Facebook ads to reach fans of niche authors I was similar to in writing style and topic. I experimented with variations of text, images, Facebook targets, pricing, and landing pages until I finally hit on a few mixes that sold books at a profit. These ads sold an extra 5-8 copies a day, and I reached about 500 books in total.
Every success involves elements of luck and timing. But just as you can, for example, maximize the likelihood of meeting a movie star by moving to Los Angeles, you can also increase the odds of serendipity. This early phase of marketing, where you’re trying to push out a few copies a day, is mostly about maximizing the chance of acquiring a reader who is also a significant influencer.
In my case, that significant influencer turned out to be Brad Feld, a well-known and highly regarded venture capitalist, who happened upon my book and blogged about it, letting a large number of people know about it. Soon afterwards, I was selling thousands of copies a month.
Since then I’ve continued marketing through newsletters, blogging, speaking at conferences, and experimenting with occasional ads on Bookbub and elsewhere.
DC: This series is high-intensity, core Science Fiction. It’s highly original, packed to bursting with ideas, and cracks along at a ferocious pace. But despite the series’ huge success, very few SF readers know your work, and most of your readers are people working in the tech sector. Why is that?
WH: I tried several marketing approaches that failed, including sending books to newspapers and soliciting reviews from mainstream science fiction reviewers. Both of these suffer from intense competition because everyone wants to be reviewed there, so the actual chances of getting reviewed are quite low. Even if you do manage the occasional review, readers of the publication are inundated with daily book recommendations, so few purchase any given book.
When Brad Feld wrote about my novel, which led to other venture capitalists, CTOs, and CEOs of tech startups reading and talking about the book, I asked myself what these people had in common. It took a solid month of deep thinking before I realized the common thread was a deep interest in technology, especially where tech is going in the future.
So although my books are science fiction, and even more specifically science fiction about AI, I think of them really as exploring the theme of future technology’s impact on people and culture, whether that is AI or anything else. Once I had this realization, it helped me solidify my marketing. For example, I reached out to Brad Feld and offered him a guest post on my techniques to predict the future. The result, How to Predict the Future1, reached hundreds of thousands of people, and was the number one Google search result for that term for quite a while.
By focusing my marketing on the themes of my writing, rather than the genre or specific topic, I’m tapping into a very different conduit to reach readers. That my book happens to be science fiction is somewhat beside the point – conceivably I could write about the same themes in a non-fiction book, and my readers would still be interested. In addition, since the influencers in this group aren’t out there recommending books every day of the week like a book review blog does, when they do make a book recommendation, it stands out, and more people buy it.
DC: A year ago, a number of leading figures in the tech and scientific community, including Elon Musk and Stephen Hawking, publicly sounded alarm bells over the rush to develop strong AI, and suggested that we might be building something more dangerous than nuclear weapons. Where do you stand on this?
WH: The potential for danger is definitely there, although Terminator-like doomsday scenarios are at the bottom of my worry list.
At the top of my concerns is that AI takes increasing control of the infrastructure of the planet, such that the impact of a widespread technology failure becomes much more significant. As time goes by, civilization becomes more technology dependent. For example, we can’t maintain the current standard of living for the current population with 1950s technology, because the older technology is not efficient enough. Project twenty years into the future: if we’re dependent on AI to manage all our infrastructure to maintain a given standard of living for the population, and then we have a catastrophic failure of AI for whatever reason, we’ll be plunged into darkness – quite literally.
Also worrisome is the scenario where AI becomes vastly more intelligent than us and decides the best way to keep us in check is to manipulate us. We’re already so vulnerable to manipulation by the media. Imagine how much more vulnerable we’d be if every communication were AI-mediated and altered. Look at Facebook’s experiment showing how altering which stories appeared in a person’s feed affected their happiness. Very subtle stuff leads to significant impacts.
At the same time, there is potential for greatness from AI. The promise of nanobots for human health and longevity, custom DNA tweaks, and many other ultra-high-tech promises, including greater resource and energy efficiency leading to a sustainably-managed planet, are much more likely to be developed if we have strong AI here to assist us. So we can’t turn our backs on it either.
The problem is that, unlike nuclear weapons, which we’ve succeeded in restricting to governments, strong AI will be accessible to anyone. Even if 99% of AI use is beneficial, it will take only one disgruntled hacker operating in their basement to build a malevolent AI. Look at the recent Microsoft AI chatbot that was unleashed, where, within 24 hours, the community had managed to get it to spout racist propaganda supporting Donald Trump and Hitler.
DC: In your first, amazingly prescient book, begun in 2009, you posit the accidental emergence of strong AI from a language optimization program called ELOPe, which was created to improve email. In the last few months, both Google and now FoxType have launched software to help users optimize their email. Given the current state of AI research and the hardware available and under development, do you believe strong, self-bootstrapping AI is a real possibility in, oh, the next decade, or even at all?
WH: I think it’s possible, although not particularly likely in the next ten years. Ray Kurzweil is well-known for his projections of when we’ll see AI, which compare the processing power available in computer chips with the power necessary to simulate the complexity of the human brain.
I used his calculations as a starting point and did my own, comparing a wider range of input values. What I found is a variety of scenarios that depend on three key dimensions.
One dimension is the complexity of the human brain. At one end of the spectrum is the assumption that we can implement intelligence more efficiently than nature; at the other end, that we can’t understand intelligence at all and must rely on brute-force simulation of nerve cells. My perspective is that we’re not going to be more efficient than nature, at least at first, so we’re looking at the more complex, brute-force scenario.
The second dimension is processing power. One end of the processing power spectrum concerns itself with what an individual has available to them in their home, while the other end of the spectrum takes advantage of massive parallel computing power available at organizations with Google-like resources.
Finally, you have the dimension of time, and the increasing processing power available to us as chips get faster. (Aside: For those concerned about the end of Moore’s law, do a quick calculation of the total personal computing power an individual has, rather than that residing in a single processor chip, and you’ll notice the total is still increasing on the same curve it was before. It’s just distributed among more devices now.)
Plotting these three values, and looking at the extremes, we end up with an Avogadro Corp-like scenario around 2015, where all the computing resources of a big company are brought to bear on a single AI, and at the other end, a hobbyist implementing an AI around 2045 on the computing power available to them personally. I wrote an essay for IEEE several years ago about why I think widespread involvement tends to accelerate technological progress2, like it did for recommendation engines with the Netflix Prize, so I’m again biased toward seeing faster AI development when the necessary computing power becomes available to the common person.
In sum, my two biases (believing we are unlikely to be more efficient than nature, and we need widespread involvement) make me think we’ll see the first true, strong general purpose AI sometime after 2030, but certainly by 2045.
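For readers who want to play with this kind of extrapolation, a rough back-of-envelope sketch follows. The brain-simulation estimates, compute budgets, and doubling time are illustrative placeholders rather than Hertling's actual inputs, but the structure mirrors the three dimensions he describes: brain complexity, available processing power, and time.

```python
# Back-of-envelope sketch of the three-dimensional extrapolation described above.
# All numbers are illustrative placeholders; real estimates vary widely.
import math

def crossover_year(required_ops, available_ops, doubling_years=1.5, base_year=2010):
    """Year when exponentially growing compute first meets the requirement."""
    if available_ops >= required_ops:
        return base_year
    doublings = math.log2(required_ops / available_ops)
    return base_year + doublings * doubling_years

# Dimension 1: brain complexity (operations per second needed).
brain_estimates = {
    "efficient functional model": 1e16,     # optimistic end of the spectrum
    "brute-force neural simulation": 1e19,  # simulate every nerve cell
}

# Dimension 2: processing power available in the base year.
compute_budgets = {
    "large company (Google-scale cluster)": 1e18,
    "hobbyist (personal devices)": 1e13,
}

# Dimension 3: time, via exponential growth in available compute.
for brain, need in brain_estimates.items():
    for who, have in compute_budgets.items():
        year = crossover_year(need, have)
        print(f"{who} + {brain}: ~{year:.0f}")
```

With these placeholder inputs, the corners of the grid fall roughly where the interview puts them: a company-scale effort crosses the brute-force threshold in the mid-2010s, while a hobbyist relying on personal hardware doesn't get there until the 2040s.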
DC: Your Singularity Series looks hard at the challenges of having biological humans and transhuman AI sharing a planet. If we arrive at strong AI by building machines that think and explore ideas and refine outcomes in the organic way humans do, using neural net and deep learning approaches as opposed to simple, linear software, do you think it inevitable that AI notions of ethics and morality will range across the spectrum from “good” to “evil”, just like our own? Or are we anthropomorphizing?
WH: If AI has free will and the ability to affect the world, it must embody some concept of ethical behavior. Even no consideration for ethical behavior is a form of ethics.
The trolley problem3 is a classical thought experiment in ethics. Pose the problem to different people, and you get different answers to what is the “right” behavior. There are hundreds of variations on that one problem alone, each representing more nuanced ethical considerations. And that’s an exercise in ethics that doesn’t take into account the messiness of real life.
You can look at the current state of American politics to see two groups, each behaving ethically according to its own standards, each convinced the other group is not merely unethical, but actually evil.
Since we humans have no one definition of ethical behavior, we certainly can’t expect AI to behave according to some absolute scale. Whatever ethics are designed into the AI by their human creators, we can be sure that some people will consider them good and some evil.
On the other hand, AI may converge on a single definition of ethical behavior over time more readily than humans do, because I suspect they are more likely to rely on utilitarianism as a guiding principle.
DC: So, blue pill or red pill?
WH: Oh, red pill definitely. I can’t tolerate the notion of having reality hidden from me. One of the most terrifying scenes I’ve ever read in science fiction was from The Unincorporated Man, in which you get to see what happens when the entire human race takes the blue pill. I still have concerns over what happens when fully realistic, immersive virtual reality is created.
DC: When I was a teen, back in the late sixties and early seventies, it was widely predicted that increasing automation would quickly bring about a “leisure revolution”, and there was a great deal of concern about how people would adapt to working much less and having lots of free time. Of course, nothing remotely like that has happened and we’re all working much harder and seem to have far less leisure time than ever. What went wrong?
WH: As someone who is juggling parenthood, a day job in tech, and writing, it’s a little hard to get perspective on this question. I work hard partly because I must, and partly because I enjoy what I do. Leaving my personal situation aside, I think there are two trends that together create the current environment. One caveat: My answer is probably very US-centric.
First, we have the rich taking a larger and larger percentage of the pie. For the majority of people, there’s no option but to work harder and longer because they aren’t getting paid a living wage for the work they’re doing.
Second, we have a higher standard of living, including basic expectations that didn’t exist in any form in the sixties and seventies. Back then we didn’t have cable, computers, or the Internet; we didn’t eat out regularly or rely on prepared foods; and many households didn’t even have one car, let alone one per person.
The default behavior for most people is to want it all, both materially and experientially, which crowds out any opportunity for true leisure.
On the other hand, I’ve seen people turn their back on our modern, consumer-oriented, entertainment-focused culture and live a much more basic lifestyle, and by doing so, they’re able to work part-time or live off savings. This voluntary simplicity requires a conscious, ongoing choice in a society that encourages consumption. It should be noted that this choice is a privilege of those making at least a reasonable income, although I’ve seen people at all income levels, including part-time, minimum-wage workers, make decisions that prioritize financial independence over more stuff.
In sum, I’d say the majority of people tend to prioritize material acquisitions and buying leisure experiences, like eating out, over actual leisure, like enjoying a home cooked meal with friends. Still, this choice is a privilege afforded to increasingly fewer people because of the ever greater diversion of wealth to the ultra-rich.
DC: As well as having three young children, you work full-time in the tech industry and are a productive and successful author. What do you do for leisure, Will?
WH: Every writer juggles at least two different roles: the creative side of the house, where they write new material, and the business side of the house, which is interrupt-driven and deadline-driven. Indie authors spend even more time taking care of the publishing side of things. One of the worst feelings is when I have a precious day free for creative writing, and I end up burning it all taking care of overdue business tasks.
So doing the actual creative writing is one of the things that feels most like leisure to me, because I get so much enjoyment from it.
I’m also fairly delighted to be able to take a walk while listening to music, whether that’s an urban exploration or a nature hike. I also love connecting with the writing community in Portland in person. We have so many interesting people at all different stages of their writing careers, with different objectives. It’s fun to get to know folks and celebrate their victories with them.
Other interests are on temporary hiatus, probably until I’m able to leave my day job. Some of these are live music, RC planes, and more involved video games. I’d also love to play with robotics.
DC: Are you a gamer? If so, which games do you enjoy?
WH: On and off. It depends on where I am in my writing, and how much else I have going on.
My favorite game of the last few years is Kerbal Space Program, which is an insanely epic space simulation in which you get to build rockets and explore the solar system. I play in creative mode, set different missions for myself, and have just one rule: No Kerbal left behind. One Kerbal in particular, my favorite, has visited every body in the solar system.
I play with a life support mod that means the Kerbals will die if I don’t replenish their air, food, and water. I found myself playing what amounted to my own version of The Martian when a mission to Eve went awry, and rescue mission after rescue mission failed or led to more problems. Two Kerbals “volunteered” to leave the ship and walk off into the distance so as to leave enough life support supplies for the last Kerbal to live. I post journal entries to an online writing community that read like fan fiction short stories.
DC: What do you read? Any favourite authors?
WH: Cory Doctorow is my favorite author by far, and there’s no greater delight than getting to read one of his new novels. I read mostly science fiction, although I also have a soft spot for thrillers like the John Rain novels by Barry Eisler. I reread the classics of 1980s cyberpunk frequently. One of my favorite books of that era that’s often overlooked is Walter Jon Williams’s Hardwired, which has left me with visions of armored hovercars for decades.
Reading is unfortunately one of those things that’s taken a hit due to the time I spend writing, and probably also due to time spent on the Internet. I read some research a while back demonstrating that we train our minds to a certain attention span. By doing so much short-form interaction on the web, we’re reinforcing the pattern of paying attention for shorter and shorter periods of time. That makes it really hard to sit down and read a novel.
One of my goals for 2016 is to spend more time reading. I just finished The Handmaid’s Tale by Margaret Atwood, which is a very powerful, devastating novel. In light of the current Presidential candidates, I found it terrifying to read. Right now I’m reading Flatland.
DC: Tell us a little about The Case of the Wilted Broccoli.
WH: My kids begged me to write something they could read. As a kid, I had a particular fondness for detective novels like The Hardy Boys and Encyclopedia Brown. I especially enjoyed novels in which the kids did everything and adults played only a minor role. So I knew these would all be elements of whatever I would write.
Then, a few years ago, the third element hit me when I was at my first Cory Doctorow appearance. Although adults probably form the bulk of his readers, Cory gears his books towards teen readers as well, and every one of his novels is an education on principles of technology, government, privacy, and power. At the event, there were several young teenagers in attendance, and a number of them asked questions during the Q&A after his talk. It was really moving to see these people who had clearly been affected by his writing.
That made me realize that if I was going to write a novel for kids, I wanted technology to feature prominently in it, with the kids fully empowered as technology creators, not just users. So a subplot woven throughout centers on the school science fair, and the kids use their project, a homebuilt drone, to help solve a mystery.
DC: And your favourite food or meal is…?
WH: I’m especially fond of izakaya, which is Japanese bar food. I tend to put whatever food or drinks I’m passionate about at the time into my writing. If I happen to revisit an older book I wrote, it’s fun to remember oh yeah, this was when I was having a martini phase, or here’s where I started drinking whiskey.
DC: Although—or perhaps because—you work in tech, some events in your series, especially in book IV, The Turing Exception, suggest strong sympathies, even a yearning, for a simpler, back-to-the-land, communitarian lifestyle. Would you like to live in a simpler world?
WH: Around the turn of the millennium, I had an intense interest in environmentalism, especially the role of individual choice in our lifestyles, which was partly motivated by a series of fantastic discussion courses from the Northwest Earth Institute4. I was mostly vegan for a while, sold my car, reduced the amount of technology around me, and spent a lot of time with people looking for an escape from consumerism. I had several brief but amazing encounters with intentional communities.
Although I’m very attracted to all of that, I also can’t ignore the part of me that’s passionate about technology, the web, and online communities. At first glance, it’s difficult to embrace both perspectives. Many in the voluntary simplicity and intentional community movements want to minimize the role of technology in their lives, while many in the tech community embrace it whole-heartedly without any thought of what makes sense to bring into their lives.
I’d love to figure out the middle ground. I don’t think simplicity has to mean living in a cabin in the woods, although I see the appeal of that. It can mean being selective about what we choose to have in our lives.
Another possible model of embracing both comes from one of my favorite people, Gifford Pinchot III, who is a cofounder of Pinchot University, a sustainable business MBA program. When I first met him I noticed he had no leisure time whatsoever and worked every minute of the day. I asked him how he sustained that pace. His answer was that he worked very hard for ten months a year, and then spent two months a year living off-the-grid on a nature preserve in Canada, chopping wood, hiking, and drumming.
DC: How do we get there from here?
WH: I’m not sure. I have some hints of things I suspect are important.
Everyone needs some exposure to voluntary simplicity or intentional community. Even if they ultimately choose not to live that lifestyle, just being aware of it as an option, and having a vocabulary to talk about it, is important. Most people don’t even know that they have, by default, taken the blue pill. They’re in the matrix as defined by popular culture.
We also need to show up to anything we do as our full, authentic selves. Too often we go to work, where we spend the majority of our functional hours with other people, and we only permit ourselves to engage on a very superficial, very safe level. Which means we end up spending most of our lives having very superficial and safe conversations.
But you don’t get any meaningful change or connection at that level. You have to be willing to be vulnerable, to show when you are afraid, to risk crying with someone or hugging them. One of the biggest travesties is the way work culture, by keeping everything “safe,” robs us of the opportunity for deep connection and meaningful engagement. I’d like to see people steal that back. We have to risk being hurt to also experience joy and love.
Tim Ferriss says many people keep themselves busy because they’re afraid of what happens if they suddenly have free time. They’re afraid of asking themselves if their life has meaning, if they know what they want to do with their lives, if they have the quality of relationships they want to have. It’s easier by far to stay busy and avoid those questions, and by all means, avoid making changes, which are scary.
Conversely, the more accustomed we become to addressing those issues, the less we fear them, because we eventually learn that usually things work out okay and we develop better skills for adapting to change. Then, from a place of less fear and greater competency, we can help the people around us go through their own life journeys.
You asked how we get to a life of greater simplicity, and my answer is we should all get to the life we want to live, whether that is simple or not, so long as we have the opportunity to be ourselves, to have meaningful relationships, and to do the important things we want to do in the world. Simplicity is one way to approach that, but it may not be for everyone.
DC: You’ve just completed a new novel, Kill Process, a tech thriller with a female protagonist, due to release in the coming months; I’ve read it, and it totally rocks. Can you talk about it a little to whet readers’ appetites?
WH: Angie Benenati, formerly a teenage computer hacker in the 1980s, is now a data analyst for the world’s largest social media company, Tomo. Struggling to cope with the aftermath of an abusive relationship she escaped five years earlier, she uses her access to everyone’s data to profile domestic abusers and kill the worst of them to free their victims.
This uneasy status quo is disrupted when she realizes that Tomo is, in effect, holding users’ social relationships hostage while systematically violating their privacy and control over their own data. Seeing too many parallels to the world of domestic violence, Angie decides she must eliminate Tomo by creating a new social network that ensures such a one-sided power dynamic can never occur again.
It’s a contemporary thriller blending the startup world and computer hacking, exploring themes of data privacy and ownership. The themes I explore stem from my interest in where power resides between people and companies, especially when the companies involved mediate our interpersonal relationships.
DC: Will, thanks so much. You’ve been a great guest and I really appreciate you taking this time with us. Is there anything else you’d like to add?
WH: It’s been a pleasure for me, as well. Thanks so much. For anyone who has enjoyed any of what I’ve said, please check out my books or sign up for my monthly mailing list, especially if you’d like to find out when Kill Process is available.
Notes
1 How to Predict the Future
2 Why I think widespread involvement tends to accelerate technological progress (IEEE essay)
3 The trolley problem
4 Northwest Earth Institute
Did you enjoy this interview with William? Let us know with a comment!
This concludes my Under the Covers interview series. Links to all the Under the Covers interviews are here