Tuesday, August 30, 2011

The Science Fiction Museum and Hall of Fame: Avatar and Battlestar Galactica exhibits

This past Saturday, I took the family down to one of my favorite places, the Science Fiction Museum and Hall of Fame.  Located in the Experience Music Project, a wildly colorful and uniquely designed building, it sits in the shadow of the Space Needle in downtown Seattle.


© Jonathan Dalar

The building was originally built for the Experience Music Project, creator Paul Allen's way of bringing something to the local community that wasn't available anywhere else.  His vision was to engage people, inspire them, and get them excited about music, not just show them artifacts in a museum.  He expanded that vision with the addition of the Science Fiction Museum and Hall of Fame in 2004, giving us exposure to artifacts and unique memorabilia we otherwise wouldn't have been able to see.


© Jonathan Dalar

The museum holds a very large number of artifacts, but shows only a few at a time.  This is done partly to preserve the artifacts, which shouldn't be on display for too long, and partly to keep a large membership engaged by refreshing exhibits with a variety of content.  After all, who wants to go see the same old thing all the time?

The Science Fiction Hall of Fame is currently closed, as it is being renovated along with the rest of the downstairs in preparation for the new horror exhibit opening soon.  That was a disappointment to me, because as a writer of science fiction, I find it an awesome experience to walk along the row of glass-etched tributes to the genre's greatest writers.  Authors in that Hall of Fame like Philip K. Dick, H. G. Wells, Ray Bradbury, Isaac Asimov, William Gibson and many others have been a great inspiration.  It opens back up next summer, so I guess I'll just have to wait and visit again then.

The exhibits currently on display are James Cameron's Avatar and Battlestar Galactica, along with the Jimi Hendrix and Nirvana exhibits on the EMP side, which I personally found equally fascinating, but a little off topic for this blog.  Avatar runs through September 3, 2011, and Battlestar Galactica through March 4, 2012.



Avatar

To start this part of the discussion, here are director James Cameron and actor Giovanni Ribisi talking about the project at the opening of the exhibit in a Seattle Times news video:




Before you even get inside the Avatar exhibit, you're met with a very cool interactive experience.  Floating jellyfish-like creatures from the movie waft around on a large screen at the entrance, and react to the shadow you cast on the screen.  If you stand still long enough, they will land on your hand.


© Jonathan Dalar

Once inside, you find a wonderful array of interactive exhibits, such as one that allows you to digitally design your own plants that might be found on Pandora.


© Jonathan Dalar

Further into the exhibit are memorabilia from the movie such as handmade models of the Na'vi characters...


© Jonathan Dalar

...and the full-sized Amplified Mobility Platform (AMP) suit.


© Jonathan Dalar

The exhibit that garners the most attention, however, is the interactive motion-capture 3D studio, where you can create a clip of yourself digitally inserted into one of two scenes from the 3D Avatar world.  It essentially lets you star in your own 30-second Avatar movie clip...


© Jonathan Dalar

...which of course, I did.  The clip instantly uploads to YouTube, where you can have it e-mailed to yourself and view it later.  This was my experience.




I'm evidently not quite ready for Hollywood, but it was certainly entertaining and educational.



Battlestar Galactica

This exhibit was my son's favorite part of the trip.  He plays the online game, and found it fascinating to see what he was playing "in real life".  The only downside for him was worrying about spoiling the ending for himself, which I found quite amusing.

The exhibit features three full-size prop spaceships, probably the coolest part, in my opinion.  They're all scuffed up, and look like they've seen their share of battle.

The Viper Mk. II:


© Jonathan Dalar

The Viper Mk. VII:


© Jonathan Dalar

And the Cylon Raider:


© Jonathan Dalar

In addition to the ships, there are a large number of costumes and other props, as well as interactive exhibits that focus on the shows' concepts and conflicts.


© Jonathan Dalar
© Jonathan Dalar


As an author, another interesting part was the genesis and timelines of the two shows (1978 and 2003).  I find myself drawn to the behind-the-scenes work that goes into a project, and to the decisions that shape it into its final version.  Comparing the two shows allows a glimpse into that, or at least it does for me.



Coming in October

And now for a sneak peek at the new horror exhibit, Can't Look Away, opening for members on October 1, 2011, and to the general public on October 2.

EMP's senior curator Jacob McMurray has been putting together the new downstairs horror exhibit for more than a year now, and it sounds like a lot of chilling fun!  He's been working with directors Eli Roth (Hostel, Cabin Fever), John Landis (Animal House, An American Werewolf in London), and Roger Corman (The Little Shop of Horrors, Death Race 2000) to curate a selection of films to serve as the launch pad for the exhibition.  The goal is to cover a wide range of the genre, looking at different generations and different kinds of films.  They'll also examine the psychological side, looking at why we as humans are so fascinated with horror even though it scares us.

Interactive exhibits will include a scream booth, where you step into a soundproof booth, watch a horror film clip, and (hopefully) scream on cue.  Your scream will be captured on film and shown on the screen outside the booth.  Is it bad that my first thought about this exhibit is that it will probably appeal most to the husbands and boyfriends in the crowd?

There will also be a shadow monster interactive, where your shadow is captured and digitized, allowing you to turn different body parts into strange grotesques and create monsters from your own human form.  I can see my kids spending a while at this part of the exhibit.

For the behind-the-scenes geeks in the crowd, there will be exhibits on sound in horror movies, focusing on sound effects and music, and how they are layered together to create the desired effect in the movie.  Again, the geek in me comes out.  I'm really looking forward to this one.

In addition to the interactive areas, the exhibit will house a number of iconic artifacts, including one of Freddy Krueger's gloves, one of the Jason Voorhees masks from Friday the 13th, and Jack Torrance's axe from Stephen King's The Shining.  Perhaps the most exciting artifact is an original manuscript from Bram Stoker's Dracula.  Seeing those alone will be worth the price of admission.  I know I'll be one of the first visitors there when it opens.



© Jonathan Dalar

Finally, I'd like to extend a special thank you to PR Director Anita Woo and the rest of the staff at the EMP.  They were very knowledgeable, and more importantly, were willing to take the time to answer all of my geeky questions.  For directions to the EMP, tickets, and further information, you can visit the EMP's website.  You can also find them on Facebook and Twitter.

Thursday, August 25, 2011

The Hive Mind

Hive minds are an integral part of many notable science fiction works.  It's a fascinating concept: all the minds in a society, commune, or other community linked together, working together, sharing the same information.  It gives a whole new meaning to the term "on the same page".

Most of the hive mind examples I've seen involve biology rather than technology, however.  And that, depending on the exact circumstances, is usually closer to fantasy than science fiction.  The two are closely related, but fantasy usually invokes magic or the supernatural to explain things we don't see in reality.

This post really grew out of one of my recent posts, Crossing the Uncanny Valley, and out of continuing that thought in light of my current work in progress, The Plexus.  With a virtual world connected to the physical world on such a personal, instantaneous level, add androids, and it seems you'd have the perfect setup for a hive mind.

Think about it for a moment.  You have a global virtual world, connecting communications, information, social interaction, entertainment and whatever else via instantaneous wireless connection.  You have androids, with built-in brains, wired into the network.  Bingo!  Hive mind.

But how would they work?  Many have argued that a hive mind causes its bearer to lose identity, to become nothing more than a drone in an insect-like society.  They argue the bearer becomes simply a tool for carrying out whatever higher purpose some (arguably non-hive-minded) entity instills in the hive.

I disagree.  I think those connected this way would not only have complete identities, but would also operate almost completely independently of each other, connecting only for data transfer and information sharing.

Think about it.  How is technology moving now?  What are the current trends?  The internet is no longer a fad, but a way of life.  Cloud technology allows us to tap into resources beyond our immediate control or ownership.  Everything is moving toward a hive mind mentality already, whether we know it or not.

So where does that leave our androids?  In good shape, really.  Picture them similar to computers today.  They remain separate entities, have their own memories, computing capacity, subroutines, and profiles, but are connected to the whole to gain whatever information they need to access.

I picture them as completely autonomous entities, able to function all on their own.  They tap into the hive intelligence for any information, but remain separate as an identity.  They're essentially like humans, but connected by thought to instant worldwide information.  Real time.  Kind of scary if you think about it.
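That arrangement is easy to sketch in code.  Here's a toy model of the architecture described above (all the names are hypothetical, purely my own illustration): each android keeps its own private memory and distinct identity, and consults the shared hive only when local memory comes up empty.

```python
# Toy sketch of the "autonomous node" model: private memory first,
# shared hive knowledge as a fallback. Illustrative only.

class Hive:
    """Shared knowledge pool available to every connected android."""
    def __init__(self):
        self.knowledge = {}

    def publish(self, key, value):
        self.knowledge[key] = value

    def query(self, key):
        return self.knowledge.get(key)


class Android:
    """An autonomous entity with private memory and a link to the hive."""
    def __init__(self, name, hive):
        self.name = name      # identity stays local and distinct
        self.memory = {}      # private memories, never shared automatically
        self.hive = hive

    def recall(self, key):
        # Prefer local memory; consult the hive only on a miss.
        if key in self.memory:
            return self.memory[key]
        return self.hive.query(key)


hive = Hive()
ada = Android("Ada", hive)
bot = Android("Bot", hive)

ada.memory["favorite_color"] = "blue"        # private: Bot can't see this
hive.publish("speed_of_light", 299_792_458)  # shared: both can see this

print(ada.recall("favorite_color"))   # blue (from local memory)
print(bot.recall("favorite_color"))   # None (not in the hive)
print(bot.recall("speed_of_light"))   # 299792458 (from the hive)
```

The point of the sketch is that nothing forces the nodes to merge: identity and private memory stay local, and the hive is just a lookup of last resort.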

Actually the concept itself isn't very scary.  It's pretty cool from a strictly speculative point of view.  The possibilities of such entities are virtually endless, no pun intended.

But while the concept isn't scary, the real possibilities of it are.  This kind of technology is maybe a decade or two from actual existence.  Just as I've portrayed it.  We'll see this in our lifetimes, folks.  Real androids, almost completely indistinguishable from humans, with the full power of instant worldwide information and computing within a thought's distance.



They would be the technological equivalent of Star Trek's Vulcans, only nearly omniscient.  Like if Spock and Data had a baby.  Disturbing visuals aside, the concept is intriguing.  Instant decisions would be made from intricate analysis of data and formed in the most logical way possible.  They would always be a step ahead of you, always able to deduce a better method of doing something, a more logical step to a conclusion, a more thoroughly thought-out process of deduction.  Couple that with scientific breakthroughs in medicine as it relates to robotics, and you'd have an almost unstoppable force.

Now I'm not saying they'd be some evil, unstoppable force bent on world domination like the Terminator, but they would hold a great deal of power.  Each would be the equivalent of a massive think tank all on its own.  They would be far more employable in any number of fields than humans.  The effects on society from those facts alone are what make this idea truly scary.

They're coming, folks.  They'll take over the world.  It just won't be as we've imagined it.

Monday, August 22, 2011

The Virtual Landscape of the Future

Technology expands exponentially.  Innovations seem to explode onto the scene overnight, and in seemingly no time at all, what was once the realm of science fiction is reality.  In fact, we're already living in a world that rivals the speculative worlds of great science fiction writers of the past, and there is no end in sight to the advances we'll see in nuclear studies, medicine, physics, space exploration, and other areas of scientific study.

Radio-frequency identification (RFID) implants have been around since 1998.  We've seen them used commonly in pets.  They have even been implanted with success in humans.  In fact, they've been in the news recently in Mexico, where kidnappings are on the rise.  People are turning to them to aid recovery efforts should they become victims of a crime that has jumped over 300% in the last five years.

While some of the claims made by companies selling these chips may or may not be valid, this does raise several interesting points of discussion.  Much has already been debated about the ethics of this type of technology.  There's the Big Brother aspect, and the potential for malicious theft of information, but there is also ability to more quickly and effectively save human lives, save money and time, and make aspects of our lives much easier and more convenient.  The technology is neutral, neither good nor bad.  And it is here to stay.

Now, scientists have borrowed from the medical world to advance computer technology, creating intelligence chips modeled on the human brain.  This is important because it's a radical departure from the traditional von Neumann architecture we've used to build computing machines in the past.  The new structure exponentially increases computing power, while keeping power consumption and size to a fraction of what is now necessary.

Quick Response (QR) codes, first seen in 1994, have recently exploded in use and popularity.  More and more we see them pop up, used in everything from extra content to advertising.  They've been called the future of marketing.  They're the natural evolutionary step from the barcode, storing roughly 7,000 digits in two dimensions instead of about 20 digits in one.
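A quick back-of-the-envelope sketch shows why that second dimension matters (this is my own illustration, not the QR specification's exact capacity tables): capacity grows with the area of a matrix code, but only with the length of a linear one.

```python
# Illustrative only: a linear barcode has data positions along one axis,
# while a matrix code like QR has positions along two. These are module
# (cell) counts, not the spec's exact digit capacities.

def capacity_1d(width):
    return width            # data positions in a linear symbol

def capacity_2d(width):
    return width * width    # data positions in a square matrix symbol

# 177 is the side length, in modules, of the largest QR code version.
for width in (20, 60, 177):
    print(width, capacity_1d(width), capacity_2d(width))
```

At 177 modules on a side, the square symbol has over 31,000 positions to the strip's 177, which is the same kind of leap as going from ~20 digits to ~7,000.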



This technology, coupled with advances in wireless technology, and the need for smaller, faster, more easily accessible information has led me to the next jump in logic in my own speculative world.  In the future, we will be connected via ourselves to the virtual world.  That's right.  Devices will become a thing of the past.  No longer will we have a phone or computer connecting us to each other in the virtual world.  We'll do it through our own bodies.

All that's missing is an interface between the brain and this technology.  A neural connector chip, implanted via nanobot technology, could bridge the brain and the wireless transmission architecture of the outside world, linking our minds directly to the virtual world.  Once that is possible, how soon until we implement a QR-code-like interface in which our very eyes are the conduit, sending information to the connector, which then translates the code into a usable format inside the brain?  We could simply look at a code to watch its content, free from any handheld scanning or computing device.

Couple this further with other sources of input, such as sonar technology or other conduits to new perceptual realities, and the virtual world would become so much a part of reality that, in time, it would be virtually impossible to live without.

Better yet, imagine this, but through your own eyes, and populated with whatever content you wanted it to show:


It would be like augmented reality eyewear, but without the eyewear.  Imagine the overlays from movies like the Terminator, or the technology of any number of recent science fiction movies, except for use in everyday situations.

Need to keep tabs on your kids in a crowded mall?  Turn on their identification feed to track them.


Need to find a place you've never been to?  Now you'll see it without seeing that little sign hidden in the corner of the window.


Looking for coffee?  Turn on the virtual connection to that type of service via the augmented reality stream.


Already most of the civilized world is connected via at least one form of social media, and many of us keep up with several.  Raise your hand if you've seen someone announce that no one will hear from them for a while because they're going on vacation or will be away from the internet.  How about someone mysteriously vanishing for a few days, only to resurface with a story of how their internet crashed and it took forever to get back online?  Yeah, I thought so.  A direct connection would let us manage all of that far more easily, interacting without having to sit down at a computer and type something out, and never missing a tweet because we left our phones at home.

It's coming, folks.  In a world that revolves around portability, ease of use, and size, it's only a matter of time before innovation makes devices of any sort obsolete.  It sounds like a scary thought, but then, the technology of today would look quite scary to anyone from even as recently as the 1960s.  Fifty years from now, the technology that surrounds us today will look as dated as the 1960s look to us now.  And at the exponential rate of advance, it may take only a fraction of that time.  Personally, I can hardly wait.

Tuesday, August 16, 2011

Crossing the Uncanny Valley

As a writer of speculative fiction, I find the uncanny valley hypothesis a very interesting topic.  It plays well into science fiction on a number of levels, as it delves into the human psyche and our emotional response to technology that has narrowed the gap with humanity.

For those of you who haven't heard of the uncanny valley: the hypothesis comes from the fields of robotics and computer animation, and proposes that there is a dip from positive to negative in the graph of human reaction as a replica approaches, but does not quite reach, human likeness.  In layman's terms, the more something looks like a human, the more positively we react to it, right up until it looks almost but not quite human, at which point it causes revulsion.

This graph shows that hypothesis:



Taking a look at the examples given, the graph appears to support the hypothesis.  Teddy bears are cute.  A zombie not so much.

But on the other hand, while zombies are much closer to human appearance, they signify many things that cause negative feelings.  They represent death, decay, a glimpse at a horrifying afterlife, all things which significantly impact our feelings toward them, no matter how cool and trendy they've become recently.
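The shape of that curve can be mocked up with a toy function (purely illustrative: Mori drew the original graph by hand, and every number below is my own assumption, not data): affinity rises with human likeness, plunges just short of full likeness, then recovers.

```python
# Toy model of the uncanny valley curve: a general upward trend minus a
# sharp dip centered just short of full human likeness. The constants
# are arbitrary assumptions chosen only to reproduce the curve's shape.

import math

def affinity(likeness):
    """Toy affinity score for a likeness value in [0, 1]."""
    rise = likeness                                          # upward trend
    valley = 0.9 * math.exp(-((likeness - 0.85) ** 2) / 0.005)  # the dip
    return rise - valley

teddy_bear, zombie, healthy_human = 0.5, 0.85, 1.0
print(round(affinity(teddy_bear), 2))     # 0.5  (moderately positive)
print(round(affinity(zombie), 2))         # -0.05 (down in the valley)
print(round(affinity(healthy_human), 2))  # 0.99 (highest of the three)
```

The toy reproduces the graph's ordering: the stylized teddy bear scores well, the almost-human zombie scores worst, and the healthy human tops them all.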

So let's look at something that doesn't represent those things.  Let's take a look at something actually designed to attract and cause positive feelings.

Let's meet an "actroid":


Now that's kind of freaky.  We're fully into the depths of revulsion that is the uncanny valley with her.  I don't know what kind of terrifying visions instantly raced through your head watching that, but if they're anything like mine, you definitely believe the validity of this hypothesis.  She's fascinating, and awesome, and inspiring, and a little bit of nightmare fuel.

As the technology develops, these actroids start to gain humanity, but still exhibit signs of that uncanny valley.  As we can see in this next video, filmed in 2008, spontaneous interaction with humans is far closer, but still has a ways to go:


It's not limited to robotics, either.  Cleverbot is a fascinating (and highly addictive) experiment with virtual intelligence.  Cleverbot interacts with you, responding with original "thought" to what you type into it.  It actually learns from interaction with humans, which is easily seen by asking it questions on socially popular topics.  Sometimes it seems you are interacting with a real human, but if you type long enough, the artificial intelligence will show through, sometimes sending you straight into that valley.
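The learning mechanic is simple enough to caricature in a few lines.  This sketch is emphatically not Cleverbot's actual algorithm (which is proprietary), just the core idea: record what a human says after each prompt, and replay that remembered reply when the same prompt comes up again.

```python
# A drastically simplified "learns from humans" chatbot: it treats each
# incoming message as a possible answer to the previous one, and reuses
# those learned answers. Illustrative only.

class TinyChatBot:
    def __init__(self):
        self.learned = {}        # prompt -> reply once seen from a human
        self.last_prompt = None

    def respond(self, prompt):
        # Learn: the human's new message answers the previous message.
        if self.last_prompt is not None and self.last_prompt not in self.learned:
            self.learned[self.last_prompt] = prompt
        self.last_prompt = prompt
        return self.learned.get(prompt, "Tell me more.")


bot = TinyChatBot()
print(bot.respond("hello"))     # "Tell me more." (never seen "hello")
print(bot.respond("hi there"))  # "Tell me more." (but learns hello -> "hi there")
print(bot.respond("hello"))     # "hi there" (replays the learned reply)
```

Type at something like this long enough and, just as with the real thing, the seams show: it can only echo back what people have already said to it.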

And that brings us to CGI, and the advances made along these lines in that field.  Let's meet Emily, a computer-generated face created by Image Metrics:


She's almost perfect.  Very, very close to human, and in fact if you're not really looking for it you can easily mistake her for human.  Right up until that uncanny valley shows up and gives you subtle hints that something just isn't right with her.  You can't put your finger on it immediately, but it's there at the back of your mind.

Will we ever cross that uncanny valley with robotics, AI and CGI?  I'm certain we will.  The rate of technological advances is astounding, and is growing at an exponential rate.  I'm sure that in a few short years we'll be there, with this sort of AI interacting with us on a regular basis.  From search engines to GPS navigation systems to all kinds of interactive learning, the possibilities are endless.

For now, I think we're still in the valley, but we'll be coming out the other side before long.  And that's when things will start to get really scary.

Update:  I was asked by the fine folks at Curiosity Quills to do a follow-up piece to this post on the uncanny valley as it pertained to speculative fiction.  You can read it here on their blog.  It turned out well.  And by well, I mean it gives us a very chilling look at what could be a possible future for mankind.

Wednesday, August 10, 2011

Book Review: 1984

When reading or watching older speculative fiction, it's easy to see how the genre becomes dated, appearing old-fashioned or simplistic in its basic premises of scientific innovation.  Science fiction films of the 1950s, '60s, and '70s provide numerous examples.

Even some of the works of the last couple of decades appear outdated with the advances in technology we've seen.  Technological discoveries have increased exponentially over the last century, and even more so over the last few years.

This means a work of science fiction has a far greater chance of becoming dated even sooner than before.  So how is it that some are able to stand the test of time to become classics, still viable after years?  Let's take a look at a great example of one that has.

George Orwell's dystopian novel Nineteen Eighty-Four has been a classic for many years, and still stands as cutting social commentary in today's world.  Not only that, many of the unique words and phrases he used in the novel are part of today's vernacular.  We use terms like "thought police", "doublethink", "Newspeak", and "Big Brother" with hardly a thought as to their origins.  In fact, even the adjective "Orwellian", coined from the author's name, has come to describe a totalitarian agenda of revisionist history and manipulated perception.  The book itself grapples with nationalism, surveillance, privacy, and censorship, topics very much at the forefront of today's headlines.  If anything, it becomes more and more relevant as time goes by.  Not bad for a novel first published in 1949.

So how did he do it?  How did Orwell create such a masterpiece, that rings true and current even today and well into the future?  How did he create a work of science fiction that does not seem to age much at all, even with the relatively recent explosion of new technology?

He used themes that lie at the core of every civilization and strike chords with everyone on an individual level.  He made humanity the core element of his plot, with themes anyone can relate to.  He did not rely solely on technology to drive the plot.  And while technology does move it along, with cameras and two-way telescreens, the main force is very human in nature.  The real focus of the book is the relationship between a government and its citizens, and, even more compelling, the way the government turns each and every one of its citizens into a spy against the rest.

The villain of the book, it turns out, is much more than the ubiquitous Big Brother.  While government entities under Big Brother's sanction are hard at work monitoring, censoring, and revising history, the citizens themselves are spying on each other.  Everyone is a willing participant in the very persecution they are subject to, because although they never really know who's watching, someone is always watching.  Whether it's an undercover agent of the Thought Police or a next-door neighbor, when someone is turned in for unacceptable behavior, it really doesn't matter who did the turning in.  This perpetuates the cycle and ingrains it in the society's children, who are taught warped ideals and beliefs from an early age.

Nineteen Eighty-Four is a fascinating tale that strikes to the core of our sense of values, morals and humanity.  It gives us a horribly chilling view of a terrifying society at one extreme end of the spectrum, while offering a glimpse at the core of real humanity on a very personal level.

They say imitation is the sincerest form of flattery, and George Orwell's masterpiece is a prime example of this.  The book has been adapted a number of times in film, television, stage, radio, and many other media forms.  It's seen countless derivatives spring up over the years, and has been the inspiration behind huge numbers of creative works.  It's been a tremendous inspiration to me in my own writing, and I'm certain many other authors can say the same.

All in all, it's one of the best pieces of literature to come from the last century, and is something everyone should have on their bookshelf or in their e-reader.

Friday, August 5, 2011

Exploring Speculative Fiction

When I was younger, science fiction was pretty simple.  In my world, you had space travel, and you had time travel.  Astronauts, aliens, time travelers - there wasn't much else.  Other forms were out there, but that's what it seemed like to me.

Nowadays there are almost as many sub-categories of the genre as there are genres of fiction.  Scores of new sub-genres have sprung up over the years, and obscure ones have expanded to the point where science fiction isn't even the over-arching category.  Really it's all speculative fiction, which encompasses science fiction, fantasy, horror, alternate history, utopian and dystopian fiction, cyberpunk, apocalyptic and post-apocalyptic fiction, the paranormal, superhumans and superheroes, and anything else that touches on things (so far) outside the realm of possibility.

Straight science fiction, as I've heard it explained, consists of two basic elements: science or scientific principles must be key to moving the plot forward, and the basic underlying theme is one of humanity vs. technology.

Of course, there are many varied schools of thought on this, and many different definitions of what comprises science fiction, what its elements are, and how it is defined.  Pick your favorite.  They're equally valid.

I like my definition, because it's simple, and because it strikes to the core elements of the genre.  Because of this, it helps to define it more clearly in the reader's or viewer's eyes.

The Terminator demonstrates this school of thought nicely, and coincidentally also falls into one of those two categories of my childhood - time travel.  Science is a definite plot vehicle in this story, because without time travel - one of the key scientific elements - the plot is nonexistent.  Even more broadly, time travel and the existence of sentient technology are both vital elements of the plot.

Secondly, the underlying theme of the entire story, from the first movie through the last movie or television series, is one of man vs. machine.  It calls into question our self-destructive relationship with technology and provides a worst-case scenario of that relationship gone terribly wrong.

Many more of the classics can be viewed the same way.  Philip K. Dick's Do Androids Dream of Electric Sheep?, or its more widely known movie adaptation, Blade Runner, is one such story.  If it weren't for androids and their attempts to escape to Earth from the Martian colonies, there would be no plot.  And this story, more than almost any other, speaks directly to the contradictory relationship between humanity and technology.

The speculative fiction genre has exploded in every direction.  Bruce Bethke and William Gibson put cyberpunk firmly on the map.  Space westerns and stories of space colonization are becoming more and more common following such stories as Joss Whedon's Firefly and Serenity, the more recent Avatar and Cowboys & Aliens, and the reboot of the classic Planet of the Apes series.  Virtual reality remains a common element, with The Matrix and its sequels and the Tron reboot continuing to expand the sub-genre.

One of the keys to speculative fiction is that while it must be fresh and believable, it is very perishable in nature.  A story that seems cutting-edge at its debut can feel antiquated and outdated after a few years of real technological advances.  If you're looking for proof, dig out that old videocassette of The Lawnmower Man, or watch some 1950s science fiction reruns.  See what I mean?  Archaic!

To a science fiction author, this means keeping on the cutting edge.  It means constantly struggling to keep up with the latest moves in technology, and trends of where society is heading.  It means consistently updating manuscripts and rewriting outdated material with subsequent edits.  It's a tough job, but a fun one.  And with so many different and exciting possibilities in the world of speculative fiction, it's one I wouldn't trade for the world.

Tuesday, August 2, 2011

Fear, Uncertainty, & Doubt: Guest Post by David Gaughran

I've been wanting to branch out and do something a little different for a while, and one of those things was to invite another author to do a guest blog here.  I think it brings a lot more to the blog than just another opinion or perspective.  It also gives me a chance to showcase someone else here and introduce you to them and their work.

Today's guest post is by David Gaughran, an up-and-coming author, blogger, and self-proclaimed proponent of self-publishing.  He has so far published two shorter works of fiction and a nonfiction book on self-publishing, which I have read and highly recommend.  Here are his thoughts on self-publishing, with some good advice based on his own experiences.


*   *   *

There is a lot of disinformation out there about self-publishing. I chose that word carefully. Some people are consciously spreading inaccurate information about self-publishing to steer writers away from it.

In the software industry, they called this FUD: fear, uncertainty, and doubt. The idea was that you would create enough question marks about a competitor's product so that the customer would stick with yours.

Those who are vigorously defending the status quo, you will find, are those that have the most to lose from it changing.

However, I'm not interested in assigning blame. I'm interested in writers getting accurate information about all the new opportunities that are presenting themselves.

Prior to 2007, a writer had one viable choice: pursuing a publishing contract. Self-publishing existed, but publishers had a lock on the distribution system. Self-publishers found it next-to-impossible to get their books in stores.

Also, to publish a print book at a price that could compete with publishers meant taking the risk of splashing out on a print run, storage space for all those books, and coming up with some way of selling to customers directly. Not easy, and a lot of people lost money.

That all changed when Amazon launched their digital self-publishing platform. Suddenly, publishers no longer had a lock on the distribution network. Plus, e-books were far cheaper to produce. There were still some costs, mainly cover design and editing, but those costs only had to be covered once - there was no extra fee for going back to the printer.

When e-books really took off in November 2010, a lot of writers began to consider self-publishing for the first time. While the first people to make real money were those that previously had a successful career in trade publishing, such as Joe Konrath and Scott Nicholson, new stars such as Amanda Hocking, John Locke, Mark Edwards, and Victorine Lieske emerged.

In addition to them, writers such as Louise Voss, J Carson Black, and Bob Mayer switched from trade publishing to self-publishing and started to make more money. A lot more money.

By now, self-publishing has proved itself a viable career path for unpublished writers, for those who have had a successful trade publishing career, and for those who haven't.

Bob Mayer, for example, made the NYT Bestseller list twice, and shifted over 1 million copies of his Atlantis series alone for his publishers. He is making more now on his own. In July alone, he made $100,000 from self-published work.

Some people might say that all of these people are exceptions to the rule, that only a tiny percentage will succeed. But isn’t this true of trade publishing? What percentage of any agent’s slushpile will make it onto the bookshelves at Barnes & Noble? What percentage will get any kind of deal at all?

One of the more common tactics to scare people away from self-publishing is to tell them that no agent or publisher will ever touch them if they go that route. Someone should really mention that to all the agents hunting in the Kindle Store for new clients.

I know of ten self-publishers that have been approached by agents in the last few months, just from hanging out on Kindle Boards. One agency alone – Trident – has signed five self-publishers that I know of this year. Every few weeks, I hear of another self-publisher that has been approached directly to sell foreign rights to their work.

I think we can say that it’s clear that self-publishing is a viable path. But is it the best path for you and your work?

That is a question that each writer will have to answer for themselves. But what writers need to understand is that it’s not either/or. Many self-publishers I know also have trade deals for some of their work. Many in trade publishing are self-publishing “side projects” such as reverted backlist titles, short stories, or novels they were unable to place.

I think this kind of “mixed portfolio” will become more common, not less common, and in fact I think it’s a prudent approach as you will get the best of both worlds: the higher royalties from self-publishing, and the audience expansion into print that’s so hard to achieve on your own.

Barry Eisler walked away from a huge trade deal to self-publish. He released three titles, then signed a trade deal with Amazon for one book, and has indicated he will be self-publishing further titles in the future. J Carson Black just signed a three-book deal with Amazon. She will be releasing two self-published titles in the fall. Amanda Hocking signed a huge trade deal with St. Martin's Press. She will be continuing to self-publish other work. Michael J Sullivan signed a six-book deal with Orbit. He will also be self-publishing.

The point is, they are not mutually exclusive paths. You can self-publish some work and pursue trade deals for other projects.

Back in March, when I was really struggling to decide whether to pull my novel from the remaining agents that were considering it and self-publish it, I didn’t know this. It was only when I realised that I could self-publish a couple of stories – as an experiment – that I broke the impasse.

And that’s exactly what I did. I didn’t have to pull my novel. It remained with the agents. And I self-published. Two months later, I had sold over 200 books. I pulled the novel.

So for anyone unsure about self-publishing, for anyone that doesn’t know if it’s something they would enjoy, or something that could work for them, I suggest doing the same. Self-publish a short story. See if you enjoy the process. Maybe you will, maybe you won’t.

If you do, then you can consider publishing your novel that way. If you don’t, you’ve lost nothing other than the minimal cost involved in publishing a short, and you will at least have learned something.

I’m a huge convert to self-publishing. If you had asked me about it six months ago, I would have thought it was only viable for writers with a sizeable backlist of reverted titles. I don’t think that anymore.

But, like most self-publishers, I wouldn’t say no to a trade deal if the terms were right. However, the difference now is that if I am approached by an agent or publisher, I will be dealing from a position of strength. I made $425 last month. From self-publishing. It’s only my third month. I haven’t even got my novel out yet.

And if a publisher approached me tomorrow, and made an offer on my novel, I know exactly what the minimum terms I would accept are. I have sales records. I have built a platform. I have a rough idea of how many books I could sell.

When I was in the slushpile, I would have taken anything. Now, instead, I know the value of my work. I’m making money. And I’m writing more than ever because the joy is back.

Chasing an agent is such a grind. It’s such a negative experience. Self-publishing has been nothing but positive. I’m back in control of my life and my career. And I’m having a blast.

Self-publishing might not be for everyone. But I think even that is looking at it the wrong way. You aren’t making a career choice which is tying you down for life, closing doors. You are making a decision on one book, or story. It’s not binding.

And if you do it right, you might find that it opens doors.

- David Gaughran

*   *   *

Many thanks to Dave for sharing that post here.  He's got a few things figured out that authors, self-published or otherwise, would do well to heed.  I wish him the best of success in his own endeavors.  His blog, Let's Get Digital, is a great source of information for writers and readers alike, and consistently features current news on the world of publishing.  You can find his books on Amazon.com, as well as other digital book sources on the web.