
STACKED

books


A Closer Look at The New York Times YA Bestsellers List, Part 1

November 4, 2013 |

*This is part one of a two-part post. Part two will publish tomorrow.

How often do we hear that YA is full of women? That this is a land where there aren’t boys or men? That its readers and writers are girls, with all the implications of what that might mean?

I’ve been thinking about this a lot lately. And I decided it was time to finally sit down and look at one of the most well-known and highly-revered tools that the book world looks to when it comes to status and acclaim: The New York Times Bestsellers List.

Before diving into the data and making some connections among the things I saw, I thought I’d break down the NYT list a little bit. You may or may not remember that last December, the Times decided they were going to change up how they handled their Children’s lists. It used to be that all Children’s books were on one list, thereby having 10 spots for books published among all categories for children. The change that was made ended up splitting middle grade and YA from the general Children’s list, giving them their own lists. This offered more spots for books within those categories, and with the extension of the list, there are 15 books labeled “Bestsellers” each week.

In addition to those lists, there are series lists. When (theoretically) there are three books published within a given series of books, the books within that series are tied together and placed on the series list, rather than offered individual spots within the overall list.

The NYT list tracks sales within a given week and highest sales correlate to placement on the list. For YA (and MG), e-book sales are included in the totals, whereas children’s sales are not. Keep this in mind with the data below because I think that those e-sales play a role in what emerges. Sales numbers are reported from big retailers, online and off, and rather than rehash all of the details of how it works out, it’s worth reading the Wikipedia article to know what does and doesn’t count, as well as the controversies surrounding the counting — there are enough citations here to fill you with all kinds of glee, don’t worry.

I’ve focused all of my number-crunching on only the YA list. There is nothing in here about the MG list and nothing in here about the series list, except where I’ve chosen to make it a relevant point (and there is one big such point, which comes up below). In total, I looked at 47 bestseller lists from the NYT, which is everything from the beginning of their separate list through the week of 11/5/13.

What you should know if you don’t already is that the lists are printed two weeks in advance of the sales. In other words, the list for November 5 covers the sales for the week of October 13 through 20. Knowing this will contextualize much of what you see later on in the data, as books which are published on a certain date that would no doubt make the NYT List don’t do so immediately — there is a two-week lag in the reporting.

The reason that I pulled November 5 as my final counting date for the data is simple: the list for November 12 is one that changes the game, as it’s the week when sales for Veronica Roth’s Allegiant come through, and thus it’s the week when the two spots she’s held on the list for YA are opened up. Her series, now three books, leaps over to the series list instead. More on that in a bit.

Other notes from my data collecting that I should share: when there were author teams, each of the authors counted. So when Maureen Johnson and Cassie Clare’s book made the NYT list, each of them was tallied as a woman. The single exception to this came through a personal judgment call you may or may not agree with, but which changes the data and results very little. That exception came for books written by James Patterson and a co-author. I chose not to count the co-author because, as anyone who works with readers will tell you, it’s Patterson who is the author. Readers do not ask for the Maxine Paetro book. They ask for the newest Patterson. Without his name leading the book, there is little doubt in my mind that that book would not make the list. I also made an exception for the Gabrielle Douglas book, which is authored by her but was written “with” someone else. I did not count the person it was written “with,” since “with” indicates something different than “by.”

As in past data posts, there is a LOT to sift through here. I like numbers and correlations among them. I like to take guesses and talk about what I’m seeing and what I think it means. I invite you to do the same thing. It should also be noted that I’ve rounded all of my numbers to the nearest whole to make the data easier to read.

You can see my raw data, with some miscellaneous notes-to-self, here.

Individual Gender Representation on the NYT YA List


The first thing I wanted to know was how well men and women were represented on the lists. I’ve always suspected that men outnumbered women on the list, and when I’ve made that claim before, I’ve been told that’s not true.

But actually, it’s startlingly true.

Starting on the very basic level, I counted up the number of different men and the number of different women who occupied a space within the NYT YA List. Because I wanted to cast the widest net possible from the beginning, I looked at not just the top ten list, but the extended list of 15.

On average, there were 7 men on the list and 4 women. Again, these are the number of individuals, so authors who appeared more than once were only counted one time.
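For anyone who wants to replicate this kind of tally, here is a minimal sketch of how the weekly counts of unique authors by gender could be computed. The record layout is hypothetical and this is illustrative only, not the actual spreadsheet workflow behind the post.

```python
# A minimal sketch of the weekly tally, assuming a list of records shaped
# like {"week": "2013-01-06", "author": "Veronica Roth", "gender": "F"},
# one record per author per listed book. Illustrative only.
from collections import defaultdict

def unique_authors_per_week(records):
    """Return {week: {"M": count, "F": count}}, counting each author once per week."""
    seen = defaultdict(lambda: {"M": set(), "F": set()})
    for r in records:
        seen[r["week"]][r["gender"]].add(r["author"])
    return {week: {g: len(names) for g, names in genders.items()}
            for week, genders in seen.items()}

def average_per_week(weekly_counts, gender):
    """Average number of unique authors of one gender across all weeks, rounded."""
    return round(sum(w[gender] for w in weekly_counts.values()) / len(weekly_counts))
```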

Let’s look at this more granularly.

This is a week-by-week comparison of the number of men represented individually on the NYT List in blue against the number of women represented individually on the NYT list in red. Note that except for a few scant weeks in the middle of the chart — that would be around May and June — men appear more frequently on the list.

So how does the top ten fare when it comes to individual gender representation? If we remove the extended list, will women show up more frequently?

Not so much. In fact, the top ten list is even more disappointing to look at if you’re looking for the proof of women dominating YA.

On average, men appeared 5 times on the top ten for YA, while women represented a scant 2. Again, these are the number of individuals, so authors who appeared more than once were only counted one time.
So more granularly:
As should be absolutely clear, there has never been a time when women have outnumbered men on the NYT List in the top ten. Never. There have been six weeks where there have been a grand total of four women in the top ten — January 6, February 24, March 31, May 5, May 19, and June 9. For the weeks of May 5 and June 9, we had female coauthors on the top ten (each counted as an individual), and for the weeks of March 31 and June 9, Abbi Glines and Nicole Reed — both “new adult” authors whose books were listed for readers 17 and older — also occupied spots on the top ten.
It gets more interesting if you look at how few spots individual women have had on the top ten list. There have been nine weeks when only one woman has had a spot on the top ten. That woman is, of course, Veronica Roth. 
The fewest number of men to occupy space on the top ten list is 3, and that has only happened a grand total of five times. 

Books and Gender Representation on the NYT YA Bestseller List


The charts above looked at the individuals represented on the lists. So despite an author having more than one book on the list, she or he only counted once.

In addition to looking at the unique frequencies of gender on the list, I decided to do a more thorough count of the books and their authors by gender to get a truer sense of how men and women occupied the list. This time, every book was looked at individually, rather than every author. Multiple books by the same author were counted each time in their respective author’s gender column.
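As a contrast to the sketch above, the book-level tally simply counts every listed book rather than de-duplicating authors. Here is a hedged illustration under the same hypothetical record layout:

```python
# Book-level tally: every listed book counts toward its author's gender
# column, so an author with three titles on a given week's list adds three.
# Same hypothetical record layout as the earlier sketch.
from collections import Counter, defaultdict

def books_per_week_by_gender(records):
    """Return {week: Counter({"M": books_by_men, "F": books_by_women})}."""
    weeks = defaultdict(Counter)
    for r in records:
        weeks[r["week"]][r["gender"]] += 1
    return dict(weeks)
```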

It got even less pretty for women who “dominate” YA.

First, looking at the books with the use of the extended list, here’s what our averages are when it comes to spots that men have on the list and spots that women have on it:

On average, 9 of the books on the list are written by men and 6 are written by women.

Let’s see what this looks like on a week-by-week basis, too.

So about the fact that women dominate YA, take a hard look at this data.

Every single week — except for two — men have outnumbered women on the NYT List. Those were the weeks of March 31 and May 19, 2013. But before you get excited thinking that women had finally “taken over” in their representation on the list, I’ll report to you that they didn’t take over in numbers. Those weeks showed five individual women on the list, which is a number still smaller than the average number of men who appeared on a weekly basis. Women “dominated” as individuals none of the time.

Those five individual women on the list represented a grand total of 8 books on the 15-book extended list on those two weeks.

Eight books.

Five women.

Fifteen spots.

For two weeks out of forty-seven total.

Just sit with that for a few minutes.

There are, of course, some variables that made these two weeks of “lady domination” happen. For the week of March 31, we see the debut of Eleanor and Park, nearly a month after its release. But the sales for the book that week most certainly reflect the John Green review in The New York Times — the review was published March 10, which is the week that the March 31 NYT List covers here. Without devaluing the book and its merits, it’s fascinating to see that the book ended up on the List nearly a month after publishing but immediately following the glowing review it received in the same publication by a male who himself regularly occupies 3 or 4 spots on the list. It would be hard to argue that without the Green review that book would have landed on the list that week.

That week also includes Nicole Reed’s “new adult” book (which I argue should not have been on this list at all because of its 17+ age recommendation and the fact it’s self-published — there is a whole other list for that). There were also two spots on the extended list held by Cassie Clare; the sales for this week were one week prior to the paperback release of Clockwork Prince, and I have some theories which I’ll get to in a bit.

For the week of May 19, I note that Abbi Glines was on the list (who, again, I argue should not have been since her book was for a 17+ audience). There are also dual appearances by Kiera Cass, Marie Lu, Veronica Roth, and an appearance by Sarah Dessen for What Happened to Goodbye.

Because I looked at the data of individuals limited to the top ten list, I decided to do the same thing with the books. So the following charts look at gender representation in the top ten by books. Same deal: multiple books by the same author were gender coded multiple times.

On average, there were 7 books written by men in the top ten of the NYT List and 3 by women.

More granularly, so you can see what it looks like on a week-by-week basis:

Books written by women have never once — never once — had at least half of the spaces on the top ten list. They’ve had a few weeks occupying four spaces but never have they had five books in the top ten slots in the 47 weeks that the YA List has existed.

A couple of other factoids to include at this juncture: there have only been five weeks where a woman held the number one spot on the New York Times List for YA. Five. Those weeks belonged to Veronica Roth (four weeks — three in mid-July, on the 14th, 21st, and 28th, reflecting a bump in sales immediately following the release of the first stills from the movie, and a fourth on September 15, likely reflecting sales following the release of the film’s trailer) and to Kiera Cass for The Elite, which stayed for one week only. Cass’s novel debuted at #1 on the May 12 list, which reflects the sales for the week her book was available for purchase.

Again, in 47 weeks, there have only been two women to see the top spot. They only held it for a combined five weeks.

Average Length of Stay on the NYT List


This data is much trickier, it’s limited to the top ten list, and it doesn’t really say anything. But I wanted to look at it for comparison with the next data set after this one.

Because the list is only 47 weeks long, we only have 47 weeks to compare average length of stay against. And it moves backwards, of course: so the books which were on the list during week one had an average stay of one week. Those which were on the list during week 23 which had been there since week 1 now averaged 23 weeks. And so on.
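A rough sketch of that bookkeeping, assuming a hypothetical mapping from week number to the set of titles in that week’s top ten plus a separate title-to-gender lookup (neither structure is from my actual spreadsheet):

```python
# Illustrative only: count how many of the 47 weeks each title appeared in
# the top ten, then average those stays by the gender of the title's author.
def weeks_on_list(weekly_top_ten):
    """weekly_top_ten: {week_number: set_of_titles} -> {title: weeks_present}."""
    stays = {}
    for titles in weekly_top_ten.values():
        for title in titles:
            stays[title] = stays.get(title, 0) + 1
    return stays

def average_stay(stays, title_gender, gender):
    """Average stay (in weeks) for titles by authors of the given gender."""
    lengths = [n for title, n in stays.items() if title_gender.get(title) == gender]
    return sum(lengths) / len(lengths) if lengths else 0.0
```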

I wanted to look at gender against the average stay of books on the list. But it wasn’t really too telling of anything. Part of that is because the top ten list had a higher number of male authors on the list, though only a handful were around the entire 47 weeks (many jumped from the top ten list to the extended list then back again). And really, comparing 8 books by men’s average stay on the list against 2 books by Veronica Roth which have been on the list the entire time didn’t show a whole lot.

But it will tell us something soon.

For men, the average length of stay on the NYT YA List was 18 weeks.
For women, it was 17 weeks.

Keep those numbers in your head in conjunction with everything above.



Trends Within the NYT YA List

Tomorrow’s post will look at some more data, including data from the first NYT List that will publish after Veronica Roth’s series is off the YA list and onto the series list. It’ll also look at publishers represented on the list and other interesting variables. But before ending this post, I thought it would be worth talking a little bit about some of the interesting trends I noticed.

I noted above more than once that lists reflect the sales two weeks prior to their publication date. That’s something to keep in mind when you notice things like drastic e-book sales for certain titles or authors. Because the YA list does reflect e-book sales, I have been curious to know what impact that makes on who is on the list and who is getting those books on the list.

Ever notice huge slashes in e-book prices? I haven’t kept track of them this year, and can’t make any certain connections, but it seems to me there’s something to be said about the appearance of some books on the list which might reflect those drastic cost reductions. Publishers can set the price of a Kindle or Nook Book at $1.99, drum up huge sales, get the book on the list, and then it’s a bestseller. Doing that prior to a paperback release or a release of the next book within a series would help that title appear on the list. A lot of times those books appear for that week and then they fall off again when the price returns to something higher.

It is not price that gets books on the list; it’s the number of sales.

Keep that in the back of your mind when the list for November 20 comes out, as last week a number of well-known YA names had their e-book prices dropped to $1.99 or even $1.40. I wouldn’t be surprised to see Rainbow Rowell appear on the list twice, for example, since both of her books were dropped to a mere $1.40 for Kindle. I wouldn’t even be surprised to see one or both of them in the top ten (I should note here that on the first post-Veronica Roth list, which I’ll talk about tomorrow, Eleanor and Park is on the extended list).

Many books will appear for a week or two on the list — usually on the list reflecting the sales of the week their new book published — and then they will slide off. This happens in more drastic numbers for female authors than male authors, which seems to be reflected in the data above. And when you look at the titles that do this, it’s hard not to think about why that might be. Is there a reason Kiera Cass’s The Elite debuted at #1, lasted a second week at #6, moved to the extended list, and then disappeared? It seems to me this sort of quick on-and-off reflects the true audience purchasing the book, and the true audience for this one is teenagers. They get very excited about the release of the next book in a series, buy it during release week or immediately after, and then the sales fall off for any number of reasons.

Books that are perennially on the NYT List definitely reflect teen sales, too, but I suspect part of why they maintain their positions is continued purchasing and recommendation from adult readers as “good YA.” Even if only 55% of YA purchases are made by adults (a figure that has been spun to make adults sound like the overwhelming buyers of YA when it’s only slightly more than half of purchases), it’s likely that those adults recommending books to other adults do what adults who recommend books tend to do: repeat and recommend the books they found on the bestsellers list, because the bestsellers list suggests a “good book.” A bestseller is a bestseller for a reason, as the logic goes.

They’re safe.

Books written by women are much more likely to see one week or two weeks on the list and then fall off than those by men. It’s depressing to think about what that might say about the value of women in YA fiction, the reflection of their work as having significant merit, and so on and so forth. But one thing is for certain: the assertion that “women dominate” is completely false, at least when it comes to Bestsellerdom.

Men do.

Filed Under: data, Data & Stats, gender, new york times bestsellers, Uncategorized, Young Adult

“Best of” 2012 Lists Revisited: How Do YALSA’s “Best of” Lists Compare?

February 7, 2013 |

Back in December, I did a huge post breaking down the annual trade review journal “best of” lists, looking at a number of different elements of those books. After looking at those numbers, I was curious to see whether there were any worthwhile comparisons to make against YALSA’s annual award and selection lists, including the 2013 Printz, Morris, Best Fiction for Young Adults (BFYA) and Quick Picks (QP). So I did some more comparisons.

A few caveats before diving in: there were 89 titles on the “best of” lists. Those “best of” lists came from Horn Book, School Library Journal, Library Journal (which is not “best of” YA fiction, but best YA fiction for adult readers), Kirkus, and Publishers Weekly. I did not go back into those numbers and add the books that made an appearance on the Bulletin’s “best of,” which came out on January 1. You can read that list here.
I’ve stuck to looking at only the books on those “best of” lists when comparing to YALSA lists for a few reasons. The first is that it’s a small sample and it’s broad, especially in light of Kirkus choosing to name so many titles on their best of list (though note that their editor was a member of the Morris committee). The second is that both the BFYA list and the QP list allow for titles to appear that came out in part of the year prior — for BFYA, titles published September – December 2011 were eligible for this year’s list, and for QP, titles published July – December 2011 were eligible for this year’s list. By sticking to the “best of” 2012 lists, I know I’ve got just the 2012 titles. I’ve also only looked at fiction titles.
Like in the prior post, information about starred reviews comes from Horn Book, SLJ, Booklist, PW, BCCB, and Kirkus. I’ve pulled that information from Whitney’s amazing roundup of starred reviews. This means that only books with two or more stars have those stars noted, though in the case of my first data set on BFYA/QP crossover titles, I looked up the books that had one starred review via Publishers Weekly’s roundup of starred reviews. 
This post is full of a lot of numbers and a lot of information. It isn’t meant to convey anything but that information. If you see any glaring mathematical errors, feel free to let me know, but I think it’s fairly solid.
So first and foremost, let’s talk just about the YALSA BFYA list and the QP list. I really like to think about those titles which make both the BFYA and the QP list because there’s something to be said about them — these are books that are not only highly appealing to teens, but these are books that are well-written and among the best of the best of fiction in that given year. 
There were a total of nine books that made both lists this year:
  • Me and Earl and the Dying Girl by Jesse Andrews (Abrams)*
  • Croak by Gina Damico (Houghton Mifflin)
  • Something Like Normal by Trish Doller (Bloomsbury)
  • Bad Boy by Dream Jordan (St Martins Griffin)
  • Island of Thieves by Josh Lacey (Houghton Mifflin)
  • I Hunt Killers by Barry Lyga (Little Brown)**
  • This is Not a Test by Courtney Summers (St Martins Griffin)** 
  • The Final Four by Paul Volponi (Penguin/Viking)
  • Beneath a Meth Moon by Jacqueline Woodson (Penguin/Nancy Paulson)**
* indicates the title made the BFYA Top Ten
** indicates the title made the QP Top Ten
Of those nine titles, two were included among this year’s “best of” titles in the trade journals. Those were Me and Earl and the Dying Girl by Jesse Andrews (on Kirkus’s list) and Beneath a Meth Moon by Jacqueline Woodson (also on Kirkus’s list).
In terms of starred reviews among these nine titles, here’s a handy chart:

Worth noting is that Andrews’s book (2 stars) and Woodson’s book (3 stars), as mentioned above, were included on “best of” trade journal lists. But, there were two titles earning more than one star and spots on both the BFYA and QP lists which were absent from any of the best of lists: This is Not a Test by Courtney Summers (2 stars and a Top Ten QP) and Final Four by Paul Volponi (3 stars and earned some discussion over at the Someday My Printz blog, which notes it had 4 stars — I’m assuming they’re talking about a VOYA “star” as the 4th).


I Hunt Killers got one starred review and earned a QP Top Ten spot.
I talked about format — hardcover vs paperback original — in my first post. Looking at these nine titles, I was curious whether there were any noteworthy things about that to tease out. And indeed!

This is obviously a very small sample size, but a full 1/3 of those overlapping titles were published as paperback originals. In the original data set, of the 89 “best of” titles, only 3 were paperback originals and 2 were split-runs. Taken together, that’s 5 of the 89 books published outside a standard hardcover-only run, or roughly 6% of the total. Could there be something appealing about the paperback format for teen readers? Maybe.

The paperback originals, for anyone interested, were Croak, Bad Boy, and This is Not a Test.

Just for fun, here are debut novels making both BFYA and QP lists:

So again, 1/3 of those overlapping titles were debut novels. In the “best of” data, roughly 20% of the titles were debut novels. 
For the data nerds, why not also look at the release dates of these overlapping BFYA/QP titles, too? I did it in the original “best of” analysis. Note, as stated above, that because BFYA and QP allow for titles in the prior year to be considered for their current year’s list, these tend to weight more favorably toward earlier publication dates. In other words, books published between July and December for QP and those published between September and December for BFYA are less likely to appear than those published earlier in the year because they are eligible in the following year, as well.

Only four months were represented here: February (2), March (3), April (1), and June (3). Again, it’s a tiny sample but interesting to look at, especially in light of how the “best of” lists played out in the trade journals, where the books published in June actually represented some of the FEWEST spots on the lists.

How about a little breakdown of what the BFYA list is itself composed of? There are a total of 112 titles by my math (the list says 102 titles, but I counted differently). I looked at both the titles published in the latter half of 2011 and those in 2012 — this data is inclusive of the entire list. Of those titles, what’s the breakdown of author gender?

Of the 115 authors — there are three books written by duos — here’s what it looks like:

That breaks down to 86 female authors and 29 male authors. 25% of the authors were male.

I also looked at the breakdown of series and stand alone novels. Caveat here: I did not include Paolo Bacigalupi’s The Drowned Cities nor Elizabeth Wein’s Code Name Verity in the series count; Bacigalupi’s is a companion and Wein’s companion was named after the original title published, so I didn’t think it technically counted.

There were 86 stand alone titles and 26 titles that were part of a series in the BFYA list.

What about the breakdown of debut and more seasoned authors?

There were a total of 93 non-debut authors and a total of 29 debut authors on the BFYA list. The debut authors accounted for about 25% of the total list.

And data nerds looking for paperback original publications against hardcovers?

There were a total of 5 paperback originals — Beautiful Music for Ugly Children, Croak, Bad Boy, Speechless, and This is Not a Test.

When I originally pulled the paperback/hardcover/split run data for the “best of” lists, Aristotle and Dante Discover the Secrets of the Universe was a split run title. At the time, I found the paperback edition on Barnes and Noble (and the hardcover on Amazon). Now the paperback shows as unavailable (with no date) on Barnes and Noble, while Amazon lists an April availability and Target a February availability for the paperback. I have a feeling the April paperback release will be a reprint edition of the original paperback, this time with the awards on the cover — in other words, a more formalized paperback run than the original split run. I’ve included it as the single split run title in this data for consistency’s sake.

There were a total of 106 hardcovers.

The last data I looked at for the BFYA was what publishers were represented. This chart is harder to read, so I’ll pull out the interesting bits below.

I compressed all of the imprints into their respective houses in this data, so Tor/St Martins Press/FSG and so forth are all beneath Macmillan. Note that Hachette refers to Little Brown Books for Young Readers. Random House had the most BFYA titles, with 14 represented. Following Random House was Macmillan, with 12 titles, then Penguin and Harper with 11 each. Candlewick held its own with 8 titles.

Since looking at the overlapping BFYA/QP titles and then the BFYA titles alone wasn’t enough, I decided to dive into the QP titles individually. There are a few important caveats: I did not look at the non-fiction titles on QP. I also did not include books that were on the list as a series — so, the Chris Lynch books, the Megan Atwood books, and the “Travel Team” series were off limits. This was done to save sanity and level the playing field in terms of data. All told, I looked at 46 QP titles.

Of those 46 QP titles, how did gender play out? There were 47 authors total, due to a writing duo.*

There were 18 male authors and 29 female. This breaks down into 38% of the authors being male. Compare that to the 25% ratio for BFYA books.

Another interesting data set for the QP titles was the paperback and hardcover breakdown.

There were 12 paperback originals of the 46 total. That’s a much larger percentage than BFYA, and I would think due in large part to the Orca books represented on the list (more on that in a second).

How about the debut authors and the more seasoned writers?

There were 38 non-debut authors and 9 debuts. 19% of the authors were debuts for the QP list. This is a smaller percentage than on the BFYA list. Part of that might be due to the Morris award titles on BFYA, which will be discussed further below.

And because now I’ve set the bar high, here’s how those QP titles break down by publisher. Note that Hachette refers to Little Brown Books for Young Readers. Again, imprints have been collapsed into their bigger houses.

It’s hard to read, but far and away, Macmillan had the most titles on the QP list, with 9 titles. The next closest was Penguin, with 5 titles total. Orca, which specializes in high appeal titles, made a good showing here as well. Most of their titles are paperback originals, as noted above. They had 4 titles on the QP list.

***
Now that I’ve looked at the data for those BFYA/QP overlapping titles, as well as those lists individually, let’s look at some other numbers. In this round, I only looked at the books which were among the 89 titles represented in the trade journal “best of” lists. All of the caveats and notes regarding where that information came from are at the top of this post.
First, the Morris Finalists — Wonder Show, After the Snow, Love and Other Perishable Items, The Miseducation of Cameron Post and Award winner Seraphina.
  • These titles earned a combined total of 15 starred reviews. Seraphina earned 6, followed by 4 for Cameron Post, 3 for After the Snow, and one star each for Wonder Show and Love and Other Perishable Items. 
  • These titles earned a total of 8 “best of” list placements. Again, Seraphina took the lead with three, followed by Cameron Post with 2, and one place each for the remaining titles.
  • Seraphina was named a BFYA top ten book. 
  • Two of the titles did not make the BFYA list at all: After the Snow and Love and Other Perishable Items. Worth noting, though, that Love is eligible for next year. After the Snow is not. 
  • None of these books were on the QP list. Only one is eligible next year. 
How about the Printz honors and winner? Those titles earning honors were Dodger, Code Name Verity, The White Bicycle, Aristotle and Dante Discover the Secrets of the Universe, and the winner was In Darkness. 
  • These titles earned a combined total of 16 starred reviews. Dodger and Code Name Verity each earned 6 starred reviews. Both Aristotle and Dante and In Darkness earned two starred reviews each. White Bicycle is nowhere to be found, except for a single review written for Booklist by the Booklist consultant to the Printz committee.
  • These titles earned a total of 12 “best of” list placements. Code Name Verity took top honors with 5, followed by Dodger with three, and two “best of” placements each for Aristotle and Dante and In Darkness. Again, no White Bicycle to be found.
  • Code Name Verity, Dodger, and Aristotle and Dante were all named BFYA Top Ten titles. In Darkness earned a spot on the BFYA, as well. There is no White Bicycle to be found on the BFYA list, but it is eligible for next year’s list.
  • White Bicycle is the only paperback original. It’s the third book in a series of stand alone titles. It’s from a small Canadian press. 
  • None of these books were on the QP list. Only one is eligible next year: The White Bicycle. 
Let’s look broader now at the 89 “best of” titles and how they did when it came to earning spots on this year’s BFYA list. First, every single one of the BFYA Top Ten titles was on at least one “best of” list. I wanted to make a nice chart for this, but I can’t get it to work out like I want to, so more bullet points ahead.

  • Of the 89 total “best of” titles, 48 went on to earn a spot on BFYA. Now again, some will be eligible next year. Of the books that did not earn a spot on BFYA this year, 15 are eligible next year. Those are Son, Summer of the Mariposas, Love and Other Perishable Items, The Crimson Crown, Assassin’s Curse, Reached, The FitzOsbornes at War, Vessel, Pinned, Stormdancer, Be My Enemy, Broken Lands, This is Not Forgiveness, Passenger, and Passion Blue.
  • Of the titles on the “best of” lists and on BFYA, a combined 36 starred reviews were earned and a total of 24 “best of” list spots were earned. Code Name Verity, The Raven Boys, and Seraphina earned six starred reviews each, followed by 4 starred reviews for Never Fall Down, 3 each for The Diviners and Every Day, and 2 starred reviews for the remaining titles, Aristotle and Dante, Me and Earl and the Dying Girl, Enchanted, and Boy21. In terms of appearances on “best of” lists, Code Name Verity earned 5 spots, followed by four for The Diviners, 3 each for The Raven Boys, Seraphina, and Every Day, 2 for Aristotle and Dante, and one list spot for each of the remaining titles.
  • Of the “best of” titles, only three of the 89 made the QP list. Those were Me and Earl and the Dying Girl, Beneath a Meth Moon, and Girl of Nightmares (which was absent from BFYA altogether).
Another interesting note in terms of the BFYA/QP lists I wanted to point out: at the teen feedback session for BFYA that I sat in on, the teens talked a lot about how much they loved Jennifer E. Smith’s book The Statistical Probability of Love at First Sight. It is absent from both the BFYA list and the QP list. It’s not eligible next year. 
I’m sure there are a million other ways to slice and dice this data. I could look at release dates and list making. I could look at genre or debut status across “best of” lists and the BFYA/QP lists. It’d be interesting to see what the starred reviews looked like for all of the BFYA/QP titles. But I think with what’s up here, there’s plenty to think about and chew on. And I’ll bring it all back to this: different “best of” lists look at entirely different things. It’s fascinating to me how titles which make both the BFYA and QP list and earn starred reviews can be missing entirely from the “best of” trade journal lists. Likewise, it’s fascinating that titles that were Morris honors can be absent from BFYA entirely, too. 
Were there any surprises here? Any additional thoughts? I’d love to hear.

* Worth noting — Andrew Karre pointed out to me a couple additional things worth noting here. Some of the QP authors may be using pseudonyms, so my numbers here on debuts and gender are based on my looking up the names as they are and my most educated guessing in some instances. Likewise, Orca, Darby Creek, and Saddleback titles come out as “simos,” meaning in paperback and library hardcover editions. I left the data as it is in terms of hardcover and paperback, since library hardcovers aren’t generally sold to the general public (whereas you can more readily purchase the paperback at an online retailer). 

Filed Under: best of list, Data & Stats, Uncategorized

“Best of 2012 YA” List Breakdown, Part 2

December 14, 2012 |

Last year on The Hub, I broke down the “best of” lists into a number of different factors. Yesterday, I revisited that post with this year’s “Best of” lists.* I looked at the following: authors by gender, debut authors vs. non-debut authors, gender of debut authors, genre representation in the “best of” lists, as well as the frequency with which books appeared on “best of” lists. But because I am an overachiever and love looking at data, I didn’t stop with what I posted there. I looked at several additional factors within the “best of” lists. I’ve again included graphs for your viewing pleasure.
I documented the titles appearing on Horn Book, School Library Journal, Kirkus, and Publishers Weekly‘s “best of” lists. Last year, I did not include information from Library Journal, but I have decided to include it this year (though note that Library Journal’s “best of” list for YA titles is called Best Young Adult Literature for Adults).
There are a number of important comments to make before showing off the data. First, I limited myself to fiction titles only. They’re easier to track information about. I did not include graphic novels or short story collections — this disqualified only 5 titles from my list. Likewise, I ensured all titles were marketed for young adults, age 12 and older. I verified all information through Edelweiss, and for the small number of titles I couldn’t find on Edelweiss, I relied on Amazon and/or trade journal reviews. All genre categorizations are based on my own knowledge/reading of a title, or they’re based upon the most common terms in Edelweiss. I collapsed many genres together for simplicity. This is the most subjective portion of the breakdown, and it is further explained beneath that data set.
There are a total of 89 titles and 90 authors being considered in the data. 
All starred ratings come from six sources: The Horn Book, Publishers Weekly, School Library Journal, Library Journal, Kirkus Reviews, and the Bulletin of the Center for Children’s Books (BCCB). I verified starred ratings through Youth Services Corner’s very current and accurate list. BCCB does not produce their “best of” list until January 1, so it will be interesting to see where their picks line up. Worth noting, too, is that Kirkus’s “best of” list this year included 100 teen titles, up from 42 last year. At the time of posting, there is not yet a “best of” list from Booklist.

Fair warning: this post is long, but it is graphic-filled. Because I think these “best” lists are a nice slice of a year in the book world, looking at them numerically is fascinating — but note that nothing here is conclusive or proof of anything. These are all my thoughts and musings on the data. Also worth noting is this is my math and while I am confident in my statistical skills, I’m also human. There is a chance there are errors, and I accept responsibility for that. I’m hopeful there are not though. 

The first thing I wanted to look at was whether books that published in the first half of the year — January through June — were represented on “best of” lists with more or less frequency than those that published in the second half of the year — July through December. This is of interest just in terms of access to titles, as well as the lasting impact of titles. If a book was good in January, is it still good compared to everything else published in the year? Or do books that were published in December get overlooked inadvertently? Last year, there was a slight preference toward books published in the second half of the year. What about this year?

There isn’t a huge difference in release dates and appearance on the “best of” lists, though this year’s numbers show a preference for titles published in the first half of the year. There were a total of 48 titles on these lists published January through June and 41 published between July and December.

For kicks, I broke it down even further. Here’s when books on the “best of” lists were published this year:

Though they’re fairly evenly distributed, books published in the summer and in the late fall/early winter saw fewer titles on the “best of” lists. September had the most books published that ended up on “best of” lists.

I looked at debut novelists yesterday in the post at The Hub, but I thought it would be interesting to see whether or not there was a better month to be a debut novelist. So, here is when the 18 debut novels that ended up on “best of” lists were published:



March and August had the highest showing of debut novel publications that then went onto “best of” lists. There were no debut novels published in May or November that went on to “best” lists, despite May being a bigger month for non-debut novels which ended up on a list. It’s pretty even in terms of first and second half of the year publication dates and appearance on a list.

I’m not done with debut novel analytics yet, though. I noted in my post yesterday a couple of important facts: first, the Kirkus “best of” list contained 100 titles (which were then judiciously weeded by me for the purposes of data gathering), which was a significant number. Second, and maybe more interesting to me, was the fact the editor of Kirkus’s “best of” list is a member of this year’s Morris Awards committee. All of the Morris finalists are on that list, as well as a number of other debut novels. I was curious if, seeing the length of the list and knowing some of the editor’s own reading over the year, there would be more debut novelists on one awards list, as opposed to others. 

Roughly 20% of all “best of” titles are debut novels.

For this data, I counted the number of “best of” titles from each list, then I counted up the number of those titles which were debuts. Enter some division, and I came up with the percentages of each list were made up of debut novels:

The blue bar is the total number of selected titles, with the yellow bar being the debut novels selected. I tried to make this graph interactive, but that didn’t work well with Blogger, so apologies! 

The raw numbers are as such: School Library Journal selected 4 debuts out of a total of 20 titles (20%); Kirkus selected 16 out of a possible 82 titles (19.5%); Library Journal selected 1 out of a possible 8 titles (12.5%); Publishers Weekly selected 1 out of 11 titles (9%); and finally, Horn Book did not pick any debuts for their “best of” list (0%). This was surprising — I expected the highest percentage to come from Kirkus, but it did not. It was neat, though, that Kirkus and School Library Journal each selected almost exactly the same percentage of debut novels for their “best of” lists as the share of debuts across all of the lists combined (roughly 20%).
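The arithmetic behind those percentages is nothing more than one division per journal. A quick sketch using the counts reported above (Horn Book’s total is omitted since only its zero debuts is stated; the journal names and tuples are just the figures from this paragraph):

```python
# Per-list debut share, using the counts reported in the paragraph above.
def debut_share(debuts, total):
    """Percentage of a journal's "best of" picks that were debut novels."""
    return round(100 * debuts / total, 1)

picks = {"School Library Journal": (4, 20), "Kirkus": (16, 82),
         "Library Journal": (1, 8), "Publishers Weekly": (1, 11)}
for journal, (debuts, total) in picks.items():
    print(f"{journal}: {debut_share(debuts, total)}%")
# School Library Journal: 20.0%, Kirkus: 19.5%,
# Library Journal: 12.5%, Publishers Weekly: 9.1%
```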

What was the distribution of books that were part of a series and those that were stand alone titles? Were there more series or stand alones?:



Personally, I’m thrilled to see so much stand alone love. There were a total of 53 stand alone titles and 36 series titles. In determining what was and was not a stand alone, I did not include Code Name Verity in the series category, despite there being a companion in the works — the book was originally a stand alone title. Books like The Drowned Cities, though, were included in the series category. That choice was because it is labeled as “Shipbreaker #2.”

Which leads naturally to the next data set, which is where series books fell within a series. Were first in a series or last in a series more likely to make a “best of” list? Or were middle books the real winners here?

It wasn’t entirely surprising to see most of the “best of” series titles were either the start of a series or the conclusion to one. There were 6 middle titles — which I defined as anything between the first and last, regardless of the number of books in the series. Within the last in series category, I did include sequels when a series only included two books (like Such Wicked Intent and Girl of Nightmares, neither of which I could find definitive information about a future installment). There was one prequel to a series, and I marked The Drowned Cities as a companion title, rather than as a straight sequel or final in a series.

I’m always curious if earning starred reviews means that books have any more chance of appearing on a “best of” list. In other words, if a book earned 6 starred reviews, is it more likely to show up on multiple “best” lists? I don’t know if there is any connection or not, though there does tend to be a likelihood that titles on a “best of” list will have earned stars from that publication (many of the “best” titles on Kirkus’s list earned starred reviews from Kirkus — and in many cases, those titles only earned stars from Kirkus).

Before that, let’s look at the distribution of starred titles. In other words, how many of the 89 books earned 6 stars vs. no stars at all:



The bulk of books earned only one starred review, followed by books earning either two or three stars. There was a good chunk of books that didn’t earn any starred reviews at all. 

Does this translate, then, to frequency of a book’s appearance on a “best of” list? In other words, do books with more stars show up more often? Maybe. 



The chart should be fairly self-explanatory, but just in case: the bottom labels indicate how many books earned that number of starred reviews (so there were six books that each earned 6 starred reviews). Stacked above are then the number of lists those books appeared on. So of the books with 6 starred reviews, only 1 fell on all 5 “best of” lists — that’s Elizabeth Wein’s Code Name Verity. There were then 2 books with six starred reviews that fell on four “best of” lists and 3 books with six starred reviews each that fell on three “best of” lists. 
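For anyone rebuilding that chart from the spreadsheet, the underlying table is just a cross-tabulation of stars earned against list appearances. A small sketch follows; the column names are placeholders, not the ones in my spreadsheet:

```python
# Cross-tab of starred reviews vs. number of "best of" list appearances.
# Assumes a DataFrame with one row per title and integer columns "stars"
# (0-6) and "lists" (1-5); these names are hypothetical.
import pandas as pd

def star_list_crosstab(df: pd.DataFrame) -> pd.DataFrame:
    """Count titles for each (stars earned, lists appeared on) pair."""
    return pd.crosstab(df["stars"], df["lists"])

# e.g. star_list_crosstab(df).loc[6, 5] would give the number of titles with
# six starred reviews that appeared on all five lists (here, just one:
# Code Name Verity).
```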

The heavy majority of books earned placement on just one “best” list, and they all happened to be one-starred titles (and mostly, though not always, Kirkus picks). Kirkus was the only journal to put books without a single starred review on its “best” list; all of the zero-star titles came from there.

Here’s the raw data on that chart (you can blow this up to see it better):

Before delving into a couple other very data-heavy topics, I wanted to look at an easier one to graph (but one that’s so interesting to me, nonetheless). That’s publication type. In other words, are books that come out in hardcover more likely to appear on a list than books that come out as paperback originals? And what about split runs? Split runs have started becoming a thing at Simon and Schuster specifically, as Hannah Moskowitz talks about here. If you don’t click over, a split run is when a book comes out both in paperback and hardcover at the same time.



Here’s where visual data isn’t always the best. There were 84 books on the “best” lists published in hardcover. There were three paperback originals — A Breath of Eyre, Street Dreams, and The Assassin’s Curse. All three earned one star from Kirkus and all three only appeared on Kirkus’s “best” list. There were two split run titles — Aristotle and Dante Discover the Secrets of the Universe and The Chaos. The first appeared on two “best of” lists and earned two starred reviews, while the second appeared on one “best of” list and earned three starred reviews.

Worth noting, both titles are from Simon and Schuster. Also of interest is that the first title features an LGBTQ storyline and the second features a POC as the main character. Hannah Moskowitz’s Gone Gone Gone, which Simon and Schuster also gave a split run, has an LGBTQ storyline, and the other title of theirs I know received a split run is Mindi Scott’s Live Through This, a solid contemporary title. If there are others, please let me know. There is something interesting in their choices for what gets split, what goes straight to hardcover, and what goes straight to paperback.

This data shows that hardcovers are, by far, the most frequent types of books to appear on “best of” lists.

I blogged earlier this year about how there are far more publishers out there than just the Big 6 (well, the soon to be Big 5). So I thought I’d take a look at the individual publishers represented on “best” lists, and I’ll follow it up with a breakdown of the publishers represented by their being either a Big 6 or a non-Big 6. I’ve collapsed all imprints within their bigger house.

I put the Big 6 up first, and it’s clear they take up much of the list space, but there are plenty of mid- to small- publishers represented, too. Here’s an actual breakdown of the Big 6 against every other publisher on the lists:


I think it is neat that non-Big 6 publishers are taking up more than 1/3 of the lists, actually. I love, too, how Candlewick has four books represented — for what it’s worth, I think Candlewick is consistently putting out some of the best stuff. I won’t list those titles here because they will all be searchable in the spreadsheet linked at the end of the post.

There’s not a pretty graph for this next data set, which is something I was simply curious about. I’ll include the spreadsheet screen cap, though. I was interested in seeing what information I could find on print runs of titles that appeared on “best of” lists. This information, I should warn, isn’t always accurate or true (for a number of reasons) but I was able to track down quite a few print runs on titles appearing on the list. I then looked at the genre of the books those runs were associated with, as well as the gender of the author. I wanted to know if there was anything between size of print run, gender, genre, and appearances on “best of” lists and earned starred reviews. I think the data sample is too small to make correlations, and the accuracy is speculative, but it’s interesting nonetheless. I’m not going to interpret this information.

Because of space issues and screen capping, I could not get the column labels in with all 26 titles I was able to look at this information for. So, the columns, left to right, are PRINT RUN, LIST APPEARANCES, STARS EARNED, GENDER OF AUTHOR, and GENRE (as defined in my post at The Hub yesterday).

You’re not misreading this. There were books with 500,000 first printings and 200,000 first printings. If you look at my spreadsheet, you can see what they were. But just looking at the additional information in this image should allow you to ferret it out pretty well.

Are you still with me here? This post is never ending because there are a million ways to look at data. But this is the last big thing I wanted to look at, and it’s one I admit up front is subjective, will be riddled with arguments, and shows my own ignorance because of the gaps in my reading this year.

That is representation of POC. I looked at the books that feature POC either as main characters or supporting characters, as well as books written by authors who were of color. I haven’t read all of these, and I am not intimately familiar with all of these authors. I asked for help in some of these. So what I am about to say in terms of numbers is possibly understating it. I do not think I am overstating it, though. But to be fair, I was loose in applying “supporting” characters. Basically, if a book described a supporting character’s race or ethnicity with some detail, it was fair game. Again, I collapsed book characters in with authors, so this number is the combination of the two; I did not double dip and count instances where the author was of color AND their character was, too.

Of 89 books, with 90 authors, I found instances of POC in 22 books/authors. Taking the bigger number of authors (90) and doing a little math, 22 of 90 works out to about 24%, or almost a quarter of the books on this list. I won’t say whether that’s great or whether it’s not great because I speak from a place of privilege as a white woman. But I am thrilled to see these books getting recognition because these authors and characters? They represent the teens I work with.

One more thing about this particular stat I wanted to note. But before I do that, go read this post on YALSA’s The Hub about whitewashing of book covers and then follow it up with this thoughtful response from Diana Peterfreund.

I’d love to do a breakdown of covers on these award lists and see what is and is not trending. But I simply can’t after looking at all of these numbers. So I did the next best thing. I created two Pinterest boards with just the covers. You can look at them here and here. Looking through those quickly, I found a total of 14 covers featuring a POC pretty obviously (I include Vaunda Nelson’s No Crystal Stair in that count, if you’re wondering). That’s roughly 16% of the total covers.

Again, all of the data above comes from my breaking down of “best of” lists, especially with the context this represents a year of published YA books. You better believe I’ll be revisiting this list when the Printz awards are announced. If you want to see my raw data — and I warn you it is messy and at times, inconsistent in how it’s spelled out, though it is very thorough — you can look at my spread sheets here. I do hope someone goes through those covers I shared on Pinterest and does a post on them. Some suggested interesting things to look at: body parts on covers (eyes and hands especially), girls and guys on covers, and original art vs the use of stock images.

I think there are some interesting title trends worth noting, too, but I’ll be brief because this post has a lot of information in it and it’s getting excessively long.

Three Word Titles: There are 25, if you consider Catch & Release a three word title. Here’s the list, if you’re curious.

  • Code Name Verity 
  • Ask the Passengers
  • No Crystal Stair
  • The Raven Boys
  • The Drowned Cities
  • Keeping the Castle
  • A Certain October
  • The Good Braider
  • Never Fall Down
  • Second Chance Summer
  • Girl of Nightmares
  • Call the Shots
  • The Crimson Crown
  • The Assassin’s Curse
  • After the Snow
  • Don’t Turn Around
  • A World Away
  • The Obsidian Blade
  • Throne of Glass
  • Be My Guest
  • The Broken Lands
  • Such Wicked Intent
  • A Million Suns
  • A Troublesome Boy
  • Catch & Release

Titles That Sound Like Band Names (*And One Is): There is The List, The Disenchantments (which is the name of the band in the book), The Chaos and The Diviners.

Single Word Titles: There are 16 this year. Here’s a list!

  • Double
  • Passenger
  • Boy21
  • Pandemonium
  • Shadowfell
  • Stormdancer
  • Enchanted
  • Pinned
  • Seraphina
  • Vessel
  • Reached
  • Above
  • Cinder
  • Son
  • Bitterblue
  • Dodger

Easily Confused Titles: Let’s meditate on Between You & Me and The Difference Between You and Me for a second.

Negative Connotations: There are a ton of titles that give a negative connotation this year. I’m being very liberal in use, simply because I’ve been reading and rereading the same 89 titles for weeks now. But take a look at The FAULT in Our Stars, The MISEDUCATION of Cameron Post, In DARKNESS, Such WICKED Intent, Love and Other PERISHABLE Items, and so forth.

Gendered Titles: It’s interesting when you see gendered terms in titles and when you read them together the impression you get. So, for the female side, there’s Dust Girl, The Girl with Borrowed Wings, The Girl is Trouble, Mister Death’s Blue-Eyed Girls, Girl in the Clockwork Collar, Girl of Nightmares, Me & Earl & The Dying Girl, and The Brides of Rollrock Island. For the male side, there’s The Troublesome Boy, Sons of 613, Boy21, Confusion of Princes, Son, and The Raven Boys.

Royal and Divinely Inspired Titles: There are quite a few of them (and how many covers feature castles on them!)

  • The Diviners (subjectively, of course — objectively, not so much)
  • Keeping the Castle
  • Devine Intervention
  • The Crimson Crown
  • Throne of Glass
  • Confusion of Princes

Day and Night: There are seven titles that talk about moon, sun, day, stars, and darkness.

  • Beneath a Meth Moon
  • A Million Suns
  • In Darkness
  • Radiant Days
  • Have a Nice Day
  • Every Day
  • The Fault in Our Stars

How’s the Weather?: There are a couple mentions of weather, too. 
  • After the Snow
  • Stormdancer

A Matter of Location: We know where these stories take place.

  • Under the Never Sky
  • Above
  • Between You & Me
  • A World Away
  • Beneath a Meth Moon

Seasons, Months, and Colors: Note that the most popular season is summer and the most popular color is blue.

  • Passion Blue
  • Mister Death’s Blue-Eyed Girls
  • The Crimson Crown
  • Black Heart
  • Bitterblue
  • Second Chance Summer
  • Summer of the Mariposas
  • A Certain October

Objects, not People: Okay, I lied. I looked at covers and noted, too, that 17 of the covers did not sport a single person on them.

If you’re as thoroughly exhausted as me, kudos. This is a lot of information. It’s a lot of information without a lot of context, too, which is why it’s tough to read through and digest. It’s interesting, nonetheless. Does it mean much? Maybe or maybe not.

I suspect people could look at the spreadsheet and find a million more ways to interpret the data. It could be interesting to look at gender and starred reviews, for example. I so wanted to look at the gender of main characters, but it was simply too tough to do: not having read all of the books, I couldn’t easily decide whether to count multiple-perspective stories individually or collectively, so I ultimately chose to delete that column from my list. Maybe someone else can look at this. It’s been done before (if you haven’t read this post, do it).

All that said, there are books I am absolutely shocked saw no “best” representation. I won’t name them for many reasons, but I’m wondering if other people noticed some obviously missing titles. If you have, feel free to drop the title in the comments. I’m curious, of course, if any of those titles might see their time on YALSA lists at the start of next year.

Any thoughts? Any surprises in the data? Lay it on me!

* For some reason the links are not working on my preview page. They may work when this post goes live, but in the event they do not, here is yesterday’s post at The Hub. From there, you can see the links to last year’s post, as well as each of the review journal “best of” lists.

Thank you so much to Liz Burns, Sarah Thompson, and everyone else who helped me out in looking at and calculating the data, as well as suggesting things worth looking at. Any mistakes here are mine and mine alone.

Filed Under: best of list, Data & Stats, Uncategorized

Let’s talk about stats, baby

March 21, 2012 |

I have been sitting on this topic for a long, long time, and after seeing it become an issue earlier today on Twitter, I thought there was no better time than now!

Stats: we’ve all got ’em. They tell us all kinds of useful things, like how many people subscribe to our blogs, how many hits our blog gets, how many page views we have, where our viewers are reading from, and so on. They’re like circulation numbers in print media: stats give a good idea of how much and how many things are going on at a blog. In addition to stats as useful numbers, there are also commenting numbers, which can provide some interesting information.

These numbers can be passed along to publishers in exchange for, say, receiving advance copies of titles (those numbers can show your reach and your ability to spread the word about a title), and they can be used if you’re seeking out advertising and revenue for your blog. Stats are important because they help sort bloggers into different categories. Bloggers who get a lot of comments and a lot of hits appear to have bigger reaches in the blogging world, and they’re then more likely to be offered certain “high interest” galleys and some of the perks and promotional opportunities that come with working with publishers or publicity agencies. These are the bloggers making bigger impressions, and they’re the ones who’ll give the most exposure to the most people. It makes sense. Numbers can say a lot!

When bloggers approach publishers seeking ARCs, sometimes they will lay out their stats for the publisher, and sometimes the publisher will ask directly. Some publishers on Netgalley require bloggers wishing to receive an egalley to put that information right in their bios, so the publisher can more easily decide whether the blogger’s numbers match their ideal. If a blogger meets that number, they have a better chance at receiving one of the limited number of ARCs available (it’s not guaranteed, but it’s a point in their favor).

Bloggers work exceptionally hard to make sure they’re getting good numbers: they blog regularly, write features that garner traffic, spread the word about their posts on every social media outlet possible, and teach themselves search engine optimization to ensure their posts are among the first results that pop up when people Google a book. They check their stats daily, weekly, and monthly, and they note the trends they’re seeing and work to make sure the movement is upward, not downward.

Honestly, it’s at times mind-blowing to see how much work bloggers put into their blogs: they’re impassioned, they’re loyal, they’re dedicated, and they’re always looking for the next opportunity. Those who work hard SEE the rewards, not only through stats, but also through comments, through their posts being spread far and wide, and through being asked to take part in a huge promotional push on a big title (which then helps their blog’s exposure, their stats, and so on and so forth). But my question is this, and it will continue to be this: what does it even mean? What value does it have? Is there a value at all?

There aren’t answers and there never will be. That right there is why stats, in my mind, are not at all a useful means of measuring a blogger’s worth.

Here’s a screen shot of our stats from the last month (February 20 – March 20), as provided by Blogger. As you can see, we’ve had almost 500 hits today, and we’ve had over 22,000 hits in the last month. It’s pretty astounding, considering these numbers do not take into account our readers who subscribe via RSS (I’ll get there in a second). This counts only people who go to stackedbooks.org.

And here’s a comparative screen grab of what Google Analytics says our stats are in the same time frame. We’ve had somewhere between 4,100 and 6,800 visitors, and we’ve had 11,300 page views. As you can see, our traffic patterns vary, depending on the day and depending on the content. I can tell you that the peaks are when we have guest posts and when we have posts that elicit conversation, and our valleys are when we post book reviews (the bread and butter of what we do garners the least amount of traffic – go figure!). We also know we get more hits on weekdays as opposed to weekends, and during holidays and during conference seasons (ALA, etc.) we have declines in our readership. The traffic pattern information is useful to us when we’re planning our posts, so that we don’t post something we want people to read when we know our readership will be lower.

 

One more statistical compilation to look at: this is what Sitemeter (the little button at the very bottom of our blog and many other blogs) says about our blog. We have far fewer hits per day and per week according to this site than we do according to either Blogger’s or Analytics’s numbers. It also has our overall page views much lower than the other two.
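If you did want to boil those three disagreeing trackers down to a single number to report, the bluntest approach is to average them. Here is a minimal sketch in Python; the Blogger and Analytics figures are the rough numbers mentioned above, while the Sitemeter figure is a made-up placeholder, since all I noted is that it runs much lower (and the tools don’t even measure quite the same thing, which is part of the problem).

```python
# Placeholder reconciliation of three trackers' monthly traffic counts.
# The Blogger and Analytics figures are the rough numbers from the post above;
# the Sitemeter figure is a made-up stand-in, since all we know is "much lower."
monthly_traffic = {
    "Blogger": 22_000,
    "Google Analytics": 11_300,
    "Sitemeter": 8_000,  # hypothetical
}

average = sum(monthly_traffic.values()) / len(monthly_traffic)
low, high = min(monthly_traffic.values()), max(monthly_traffic.values())

print(f"Trackers disagree: {low:,} to {high:,}; a blunt average is {average:,.0f}.")
```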

Now, those numbers all show how many people are going directly to our blog at stackedbooks.org and interacting with it. The truth is, we don’t base our blog’s worth on these numbers at all. As both Kimberly and Jen can attest, this is probably the first time they’ve actually SEEN all of these numbers in one place. Same here. We have them, but we never pay attention. We pay attention to writing strong reviews and interesting features, and to doing so on a consistent schedule.

So, when we’re asked for our stats, we average out the numbers and get a good idea of what our page views are.

In theory.

We have never once been asked to provide our stats for anything. Never. Once.

I mentioned earlier that these numbers do not take into account readers who subscribe by RSS. That’s a number that’s always changing and inconsistent, much like the stats listed above. We can, however, get a bit of an idea thanks to Feedburner and the stats feature in Google Reader (which only gives information about Google Reader subscribers).

Here’s our Feedburner readout:

Here’s what Google Reader says for our feed:

These two services read our feed at different addresses, but Feedburner shows our Google Reader subscriber count as much higher than Google Reader itself does. These are two wildly different numbers! And then there’s the added complication of the number of people who are “following,” rather than “subscribing to,” our blog.

I’ve talked before about readership, critical reviews, and different types of bloggers, and that conversation is worth thinking about when we look at stats. Different bloggers are going to garner different readerships and different stats. They reach different audiences and have different goals. I believed for a long time this was something people were aware of, but I now know that’s not the case. There are bloggers who have astronomical stats because they’re promoting titles and working as publicity for titles, rather than as reviewers of titles. Then there are bloggers who only review popular titles. Then there are bloggers who seek out lesser-known titles or who work primarily with backlist titles. Their stats are going to be much different from those of bloggers who are, say, doing cover reveals (and racking up hits that way) or who are the first to review a very popular title (say, Bitterblue). And that is okay. It is okay. Everyone reaches a different audience and everyone has different goals, and the entire beauty of the blogging world is that everyone can coexist like this.

One of the things we know about our readership is that the bulk of it is librarians and educators. They’re not our entire audience by any means, but they’re a good chunk of it. These are people who are gatekeepers to other readers. They spread information by word of mouth and, often, by opening their budgets, too. We have readers who tell us they purchase books because we’ve given them a positive (or critical!) review. We know we have readers who look to us to find out what book they can next hand to a teen who loved one title and needs something similar.

And that — that right there — is exactly why we do this.

We don’t do it for the stats, and we don’t do it to see our numbers explode. We don’t do it so we can get the next greatest promotion or the next biggest title. We can get those from the library or purchase them ourselves when they’re available. Sure, being the first to review an exciting title is neat, but it’s never our goal here. That’s not to say the folks who do do those things are wrong. It’s just that their goals are much different from ours. And that. is. okay.

So why the long and detailed discussion of stats?

Stats tell us NOTHING.

They tell us absolutely NOTHING about a blog.

The truth of the matter is that while blogs certainly have a role in buzz marketing and in helping sell books and in putting books on people’s radars, we are only hitting certain audiences. Each blog hits different audiences and different readers, and those readers do different things with that information. They pass it along to colleagues or teens, they use it to buy books or avoid buying books, they use it to keep up-to-date on what’s coming out. But do we, as bloggers, know what they’re doing?

The answer is no. We don’t. We have ideas, and we can be told, but the truth is, unless we’re the ones buying a title, we don’t know how many copies of a book we’re helping to sell. We don’t know our true REACH. We never can, and we probably never will.

All these stats do is give us a number. They give us something to look at and to pass along, something that can feel good or bad, depending on the day the blogger looks at it. But the truth is, these stats don’t tell us about content or the quality of content. They just tell us that something was looked at a lot or not looked at at all, and when things are looked at more and when they’re looked at less. They’re a tool for the blogger to plan and think through what they’re doing. And if you take our numbers at face value, our biggest days come when we aren’t reviewing books, which is what we like doing most here and which is what publishers provide ARCs for: the review. Our stats aren’t useful except to ourselves and whatever meaning we ascribe to them; they’re not useful for publishers because, for them, it’s a raw number without meaning behind it.

Stats, as interesting as they are, really don’t tell us anything. They don’t tell us the true impact of what we’re doing. They don’t tell us whether what we said made someone buy a book. They don’t tell us how many people added a book to their GoodReads to-read shelf (sure you could extrapolate, but that’s giving yourself a lot of credit). They don’t tell us anything about ourselves except that we exist and, in some cases, we should be paid attention to. Because we ARE reaching someone. Just . . . we can’t know more than that.

Back to an earlier point: we have never been asked to provide our stats for anything, and I’ve laid them out right here for you to look at because as much as people are protective of their own, they’re also perversely interested in other people’s numbers. Publishers often talk about bloggers providing stats but they’ve not — as far as I know — given any indication of what good stats are. They haven’t laid out publicly what they’re looking for in terms of numbers or reach. At Kid Lit Con in 2010, there was a discussion about this very topic, and the response from the publishers was that they look at quality of work, they look at stats, and they look at comments. To which savvy bloggers cried precisely what I have said — numbers. mean. nothing. Reviews get the lowest views. Reviews get the fewest comments. But it doesn’t make the work any less valued or valuable or worthwhile.

There’s a lot of interest in comparing one another in the blogging world (and in the greater book world, too). But the truth is, comparing yourself to anyone else is pointless. Looking at your stats and seeing they’re better than or worse than ours says absolutely nothing about the quality of what you’re doing nor does it say anything about what your readers are taking away from your work.

Filed Under: big issues, Data & Stats, Professional Development, Uncategorized
