Monopoly wasn’t a goal for Wikipedia; it’s something that just happened.
There’s basically no way at this stage for someone to be a better Wikipedia than Wikipedia. Anyone else wanting to do a wiki of educational information has to either (a) vary from Wikipedia in coverage (e.g., be strongly specialised — a good Wikia does this superlatively), (b) vary from Wikipedia in rules (e.g., not neutral, or allow original research, like WikInfo), and/or (c) have a small bunch of people who want to do a general neutral encyclopedia that isn’t Wikipedia and who will happily persist because they want to (e.g., Knowino, Citizendium).
Competition would be good, and monopoly as the encyclopedia is not intrinsically a good thing. It’s actually quite a bad thing. It’s mostly a headache for us. Wikipedia wasn’t started with the aim of running a hugely popular website, whose popularity has gone beyond merely “famous”, beyond merely “mainstream”, to being part of the assumed background. We’re an institution now — part of the plumbing. This has made every day for the last eight years a very special “wtf” moment technically. It means we can’t run an encyclopedia out of Jimbo’s spare change any more and need to run fundraisers, to remind the world that this institution is actually a rather small-to-medium-sized charity.
(I think reaching this state was predictable. I said in 2005 that in ten years, the only encyclopedia would be Wikipedia or something directly derived from Wikipedia. I think this is the case, and I don’t think it’s necessarily a good thing.)
The next question is what to do about this. Deliberately crippling Wikipedia would be silly, of course. The only way Wikipedia will get itself any sort of viable competitor is by allowing itself to be blindsided. Fortunately, a proper blindsiding requires something that addresses structural defects of Wikipedia in such a way that others can use them.
(One idea that was mooted on the Citizendium forums: a general, neutral encyclopedia that is heavy on the data, using Semantic MediaWiki or similar. Some of the dreams of Wikidata would cover this — “infoboxes on steroids” at a minimum. Have we made any progress on a coherent wishlist for Wikidata?)
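To make the “infoboxes on steroids” idea concrete, here is a minimal sketch of what a centrally stored, machine-readable infobox record might look like, with each fact stated once and rendered per language edition. The record schema and function names here are my own invention for illustration, not an actual Wikidata or Semantic MediaWiki format:

```python
# A hypothetical machine-readable infobox record: one central statement
# of each fact, reusable by every language edition. The schema is
# invented for illustration; it is not a real Wikidata format.
helium = {
    "label": {"en": "helium", "de": "Helium"},
    "statements": {
        "atomic number": 2,
        "atomic mass": 4.002602,   # u
        "boiling point": 4.22,     # K
        "discovered": 1868,
    },
}

def render_infobox(record, lang="en"):
    """Render the central record as infobox wikitext for one language edition."""
    lines = ["{{Infobox", f"| name = {record['label'][lang]}"]
    for prop, value in record["statements"].items():
        lines.append(f"| {prop.replace(' ', '_')} = {value}")
    lines.append("}}")
    return "\n".join(lines)

print(render_infobox(helium))
```

The point of the sketch is just the separation: the facts live in one place, and each wiki’s presentation is derived from them rather than hand-maintained per language.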
But encouraging the propagation of proper free content licences — which is somewhat more restrictive than what our most excellent friends at Creative Commons do, though they’re an ideal organisation to work with on it — directly helps our mission, for example. The big win would be to make proper free content licences — preferably public domain, CC-by or CC-by-sa, as they’re the most common — the normal way to distribute educational and academic materials. Because that would fulfil the Foundation mission statement:
“Imagine a world in which every single human being can freely share in the sum of all knowledge. That’s our commitment.”
— without us having to do every bit of it. And really, that mission statement cannot be attained unless we make free content normal and expected, and everyone else joins in.
We need to encourage everyone else to take on the goal of our mission with their own educational, scientific and academic materials. We can’t change the world all on our own.
So. How would you compete with Wikipedia? Answers should account for the failings of previous attempts. Proposals involving new technical functionality should include a link to the code, not a suggestion that someone else should write it.
A direct clone with advertising hasn’t been tried, and in the current atmosphere it would be unlikely to thrive because it could not attract enough eager editors willing to work on a fork.
A few years ago I tried to work out how a peering arrangement for parallel Wikipedias could work. Peer sites would in effect have proxy accounts, and edits would appear on selected pages. As a practical example, say MIT, as part of its teaching syllabus, gives credit for collaborative editing on topics in programming languages. They create a closed MediaWiki instance and arrange peering of the relevant articles with English Wikipedia. Peering is performed by an exchange of revisions, after which each wiki is returned to its “native” state (i.e., the latest revision is a copy of the local state of the article before the peered revisions were introduced). A note is entered on the talk page explaining what was done and delivering any information needed for licence compliance.
(continued in my next post)
“A direct clone with advertising hasn’t been tried”
Didn’t Veropedia sort of try that? (What ended up happening to Veropedia?)
(This java-based phone has a tiny edit buffer, hence the split posting. I do my best thinking at the pub.)
So, somebody at the MIT wiki looks at their local copy of Wikipedia’s article on the programming language Python. Having been inducted into Wikipedia’s basic ethics by MIT she knows how to improve the article with her knowledge. She expands several sections by covering, say, the use of Python in research environments.
A few days later a peering bot grabs all her edits and applies them to the Wikipedia article, with appropriate attribution, then restores the Wikipedia article to its previous state and makes a note on the talk page. Meanwhile there were some edits on the Wikipedia version of the article, and these are entered into the MIT version of the article.
At this point all licence holders have equal access to one another’s work, and may adopt it in toto, merge in what they like, or ignore it. Over time the different copies of the article may diverge greatly.
(to be continued)
(Continued)
The peering I discuss here doesn’t do anything that couldn’t be done manually; it just uses an adaptation of the MediaWiki history feature to make different communities aware of one another’s work, and makes it a bit easier to merge in new content.
So, I think this would be one way of addressing the problem of single points of failure and encouraging the growth of externally hosted communities that have a reasonably informal peering arrangement.
I’ve deliberately skimmed over a lot of the detail here, but I think it’s clear that a peering arrangement like this would require no code changes at all. With most decent bot frameworks, you could write a bot to copy revisions like this in a couple of minutes, and the rest is just a matter of human interaction: the communities would both have to want to do this, and their site content cultures would have to be compatible (not only NPOV etc. but also various points of style).
So technically, we could do this today (or indeed, years ago).
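For concreteness, here is a minimal sketch of the revision-exchange step described above, with each wiki modelled as an in-memory list of revisions rather than a live MediaWiki instance. A real bot would go through the MediaWiki API; all names and the data layout here are hypothetical:

```python
# Minimal sketch of the peering exchange: copy the peer's new revisions
# into the local history (attributed to proxy accounts), then restore
# the article to its "native" state. A real bot would use the MediaWiki
# API; wikis are modelled here as lists of revisions, newest last.

def peer_exchange(local, remote, since):
    """Import remote revisions newer than `since`, then restore local state.
    Returns a talk-page note for licence compliance."""
    prior_head = local[-1]["text"]          # local state before peering
    imported = [r for r in remote if r["timestamp"] > since]
    for rev in imported:
        local.append({
            "text": rev["text"],
            "timestamp": rev["timestamp"],
            "user": f"peer:{rev['user']}",  # proxy account for attribution
        })
    if imported:
        # "Native" state: latest revision is a copy of the article as it
        # stood locally before the peered revisions were introduced.
        local.append({
            "text": prior_head,
            "timestamp": imported[-1]["timestamp"] + 1,
            "user": "PeeringBot",
        })
    return f"Peered {len(imported)} revision(s); see history for attribution."

wp = [{"text": "Python is a language.", "timestamp": 1, "user": "Alice"}]
mit = [
    {"text": "Python is a language.", "timestamp": 1, "user": "Alice"},
    {"text": "Python is a language. Used in research.", "timestamp": 2, "user": "Bea"},
]
note = peer_exchange(wp, mit, since=1)
print(note)
print(wp[-1]["text"])   # native state restored
```

After the exchange, the imported work sits in the local history for anyone to merge from, while the article itself is unchanged until the local community decides otherwise.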
(END)
I participated in some of the Veropedia work, so I can speak from experience. It was based on a script that performed various checks to make sure there were, for instance, no unresolved citation issues detected in the text of an article. If the script verified the article, it was then converted into a bespoke format that enabled internal links to Veropedia pages to be mixed with legacy links to Wikipedia articles.
The end result was a static hyperlinked document with no editing interface, so it wasn’t really a Wikipedia competitor, only a presentation style directed at academic institutions that had concerns about citing a source that might be changed to say “Lame wiki is lame” at any moment.
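I don’t know the actual checks Veropedia’s script ran, but in the same spirit, a “no unresolved citation issues” gate over wikitext might look something like this. The template names are the common English Wikipedia maintenance tags, not Veropedia’s actual list:

```python
import re

# Hypothetical sketch of a "no unresolved citation issues" check, in the
# spirit of the Veropedia script described above (its real check list is
# unknown to me). Flags common English Wikipedia maintenance templates.
PROBLEM_TEMPLATES = re.compile(
    r"\{\{\s*(citation needed|cn|fact|unreferenced|refimprove)\b",
    re.IGNORECASE,
)

def passes_citation_check(wikitext):
    """Return True if no unresolved citation-maintenance tags are found."""
    return PROBLEM_TEMPLATES.search(wikitext) is None

print(passes_citation_check("Lutetium was discovered in 1907.<ref>...</ref>"))  # True
print(passes_citation_check("Lame wiki is lame.{{citation needed}}"))           # False
```

Of course, the hard part isn’t the pattern matching but deciding what counts as “verified” — which is exactly where such a static snapshot approach runs into the concerns academics actually have.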
Well, academic institutions have legitimate concerns about Wikipedia but they probably cannot be addressed by such methods. I suppose it failed to make a splash because it didn’t really solve the problem. Academics, I think, actually like the idea of wikis, but they like to retain reasonable control of the content.
Please excuse the spelling errors. Sometimes “these” appears as “there” and “no” becomes “on”. A 2″ screen and a telephone keypad with predictive text will do that.
The key, I have felt since the earliest days, is to encourage the spread of the culture. Above I’ve discussed a simple technical fix that would encourage site forking and keep up information exchange. In your posting you suggest using free licensing for really useful stuff in the hope that the rest of the world will just join in because they see that we’re doing it right.
But it does worry me that this kind of thing often hasn’t happened, even when it probably should. I was frankly astounded, very early on, to find out that Wikipedia didn’t already have peering arrangements with other wikis, and I’m talking about 2004.
For relatively obscure academic topics where a large amount of knowledge is currently concentrated in small departments, I think the opportunity to peer with Wikipedia could sell quite well.
(to be continued)
David, some days ago you summarized the WikiEN-l recurring discussions about “a new Wikipedia” [actual competitor]. What about writing an essay about that (e.g. starting with that email you wrote)? Or perhaps there’s already one somewhere?
But perhaps things are rosier than I thought. It seems that universities are now encouraging their students and faculty to work on improving relevant parts of Wikipedia.
At this rate, perhaps we’re now close to being “Too Big To Fail”. That’s worrying of course, but also comforting.
I do not think we need general encyclopaedias: we have search engines. The search engines can find the information on specialist sites. The quality is usually higher, they have more related information, and you can choose from multiple sources.
@Nemo – which one?
* “I wonder at competitors that attempt to solve problems of huge success, but don’t address the problem of utter obscurity, i.e. the one they actually have.”
* “Wikipedia is, of course, a miserable failure. How can we duplicate this failure?”
* “I can picture a model in which lots of other people write what turn out to be feeder wikis for Wikipedia. But I can’t see what’s really in it for the volunteers on those wikis.”
“Imagine a world in which every single human being can freely share in the sum of all knowledge.”
Let us start from here. Wikipedia defines “knowledge” in a rather old-fashioned manner: as information stored in some media. How to compete with this? For example, one can adopt the enactivist definition: knowledge=interaction.
An encyclopedia of live human interactions, with information as the context for the interactions, seems the next logical step. Observing this year’s crop of aggregator tools that mesh collaboration and content much better than ever, I am optimistic it will happen rather soon.
I’ve started a relevant project, codenamed ‘infinithree’ (‘∞³’). I consider it a complement to Wikipedia, rather than a competitor, though some overlap of coverage (and rivalry among contributors) is likely.
Infinithree starts with the goal of being ‘avowedly inclusionist’ – relaxing the traditional ‘encyclopedic’ standards which have exiled some topics and details from Wikipedia. (Such exiled topics often then become the purview of redundant, cynical SEO content mills.)
Infinithree falls most squarely under your option (b) – varying the rules. I agree wholeheartedly that any potential alternative needs to stake out somewhere new in the possibility space – an almost-but-not-quite Wikipedia can’t grow alongside Wikipedia; the negative allelopathy is too strong.
I believe MediaWiki software itself overconstrains the necessary exploration, so Infinithree will *not* be Yet Another MediaWiki Installation (“∞³:NOTYAMWI”). I can’t yet point to the alternative software – it’s in very early development – but infinithree.org and @infinithree exist to discuss possibilities and discover potential collaborators.
“Have we made any progress on a coherent wishlist for Wikidata?”
Here is my wish list
http://strategy.wikimedia.org/wiki/Proposal:A_central_wiki_for_interlanguage_links#First_step_towards_a_Central_database_of_facts
Graeme, you say we don’t need encyclopedias because we can use a search engine to find information on a specialist website. As I recall, that’s how things were before 2001. Do you remember those days too? Do you really think things are no better now?
David, that “Imagine a world…” is the vision statement, not the mission statement, which goes into more detail.
At the West Coast Wikicon San Francisco 10th anniversary unconference I proposed that instead of competition by a fork or a new encyclopedia, we need to set up an overlapping organization (i.e. a California Chapter) sharing employees with the Foundation at the bottom of the hierarchy (instead of the top like Wikia.) The slides are at http://talknicer.com/wm10ca/wm10ca.pdf
The basic idea is that the Foundation has far too few measurement statistics and no competition for doing things like not pushing out too much javascript to mobile devices, making simple language wikipedias for beginning language learners, nurturing and encouraging administrators, advocating for political change which could reduce Foundation overhead costs, curating low stakes assessment content, doing media upload, or even testing all the submitted fundraising banner ideas, etc. A competing organization and measurements of the performance of both organizations could serve to encourage increased performance.
The idea was well received by Michael Snow (who mentioned that it would relieve the pressure from international organizations which object to the Foundation’s Public Policy project’s US-centric focus, among other things), James Forrester, and everyone else who expressed an opinion so far. I gave a copy of the presentation with some notes later on to Craig Newmark and Clay Shirky, and I hope to write up everyone’s reactions to it soon. (Please note that I’m not asking for a leadership role in such a new organization, beyond what it takes to set it up and put someone with more experience and community support in charge.)
@James – mission, vision, whatever tag ;-) It’s the one-line version of “this is what we do.” Like en:wp “We’re here to write an encyclopedia.” The thing that, unpacked, directly implies all the rest.
I would suggest you avoid joining organisations. I find myself regularly fending off attempts by small volunteer organisations to suck me into them. I suggest just coming up with compelling statements of good ideas that inspire others. Then they can do the work. Volunteer dilettante lightly-herded cat, that’s the sweet spot!