I just wanted to thank you for taking the time to review the Frantz article and for removing the deletion request. I appreciate your assessment and help in keeping the article up. Thanks again!
@Editora89119, be aware that although I've declined the speedy deletion request, that hasn't immunised it against deletion; if the editors who feel it's inappropriate for Wikipedia want to nominate it for the full "week-long discussion" deletion process they're entirely within their rights to do so.
The page in question is Jonathan Frantz, if any talk page watchers are interested. Laser eye surgery is not a topic on which I have enough knowledge to give any particularly informed opinion, but it might be worth asking at Wikipedia talk:WikiProject Medicine—Wikipedia has a lot of editors who are genuine subject matter experts in medicine and might be able to expand it (or conversely, to explain why it's not in fact an appropriate biography for Wikipedia and save you wasting time expanding it). ‑ Iridescent 15:20, 16 March 2025 (UTC)[reply]
I would not expect WP:MED to embrace this article. I would expect some of the older hands to think about that time when the Alaskan plastic surgeons all hired a PR firm to spam in articles about them. I'm pretty sure something similar happened with LASIK-related subjects, too.
@Editora89119, some of your responses at Wikipedia:Articles for deletion/Jonathan Frantz sound a bit like someone said "Dear ChatGPT, please give me a 500-word-long answer about why Wikipedia should not delete this". Effective answers sound like "Here's a link to a 600-word-long news article entirely about Frantz, and here are links to the widely recognized awards he's won, and here are links to reputable trade rags that profiled him..." Seriously: Just posting the link https://winknews.com/2019/09/06/cataract-patient-first-to-get-latest-lens-implant-technology-in-swfl/, with nothing else, would have been just as good as all the words you wrote around it. (Also, you seem to have posted the same thing twice.)
And you still haven't posted there a clear statement about whether you're being paid to write this article. Paid editing is legal. Undisclosed paid editing, however, is a serious problem. So if that's you, then disclose! And if it's not, then say so. WhatamIdoing (talk) 03:12, 17 March 2025 (UTC)[reply]
@Editora89119, let me second the good advice you're being given above. Regardless of whether they are AI generated, your wall-of-text replies at Wikipedia:Articles for deletion/Jonathan Frantz read as if you've instructed an AI to write an essay defending the article. People understandably get annoyed if they feel their time is being wasted; your best rebuttal to people saying a topic isn't notable isn't a long essay, but a concise "this topic is notable, here's a list of external reliable sources which discuss its impact".
Also, I echo the advice to make it clear if you're being paid to write the article (or if you're otherwise connected to Frantz). Wikipedia allows people with a conflict of interest to edit here provided they declare the conflict of interest so that third parties can assess any potential bias. What we don't allow is an undeclared conflict of interest, which we treat as attempts to deceive and/or to manipulate our content. ‑ Iridescent16:14, 17 March 2025 (UTC)[reply]
Hi I responded to that on the Articles for deletion page. Pasting it in here - I went to his office and had LASIK. He did not perform it though. Another doctor performed the procedure. I started a conversation with him and learned about Dr. Frantz, who I had seen on local news. He told me about Dr Frantz living in Louisiana where I once lived and studied. It was fascinating to me because I am connected to the medical world-- my husband is a cardiologist. I then talked to my husband about LASIK and how it was pioneered, and he helped me research in medical journals. I simply found it a fascinating subject. I have been a bit determined on this because once I start something I like to finish. I love this process and want to start and edit more articles. It's like I've found my calling. For the record, I have never even met Dr. Frantz. I have only seen him on TV and in magazines. Editora89119 (talk) 17:40, 17 March 2025 (UTC)[reply]
Thanks for clarifying. I hope your LASIK procedure had a good outcome.
You are welcome to come hang out with us at Wikipedia talk:WikiProject Medicine. We actually need someone who is willing to put some time in on biographies for healthcare professionals. Wikipedia:WikiProject Medicine/Article alerts usually has a list of articles that need attention. Many of them are about people and businesses. Feel free to help out. Even just reading through some of those pages might give you a better idea of what's expected and helpful in discussions like the AFD for your new article. WhatamIdoing (talk) 01:01, 18 March 2025 (UTC)[reply]
Editora89119, to increase the chance of your seeing it I'll copy across the comment I just made at Wikipedia:Articles for deletion/Jonathan Frantz in light of your being blocked from that discussion:
@Editora89119, I do agree with Star Mississippi's rationale for blocking you from this discussion—as you've been warned both here and at my talkpage, your discussion style on this page is becoming actively disruptive. If there's a comment here to which you feel you really need to reply, then post the proposed comment in your thread at my talkpage and I (or one of the other people watching my talkpage) will copy it across to this page if it's not something that's likely to get you in trouble. ‑ Iridescent 15:55, 18 March 2025 (UTC)
If you want to comment in that discussion, then post the comment you want to make here and if it's appropriate someone will copy it across to the discussion. (Your being blocked from that discussion might seem harsh, but it's primarily to prevent you getting in trouble. If you're going to tell multiple highly experienced Wikipedia editors that they're not understanding Wikipedia policy, then unless you can explain why they're wrong it's just going to irritate people. You presumably don't want to end up blocked from Wikipedia altogether, which would potentially be the outcome if you give the impression that you're unwilling to respect other people's opinions.) ‑ Iridescent 15:55, 18 March 2025 (UTC)[reply]
Thank you. Admittedly, I had never done a full article and tend to be hyper-focused and goal oriented to a fault sometimes. The only issue I have is why the double standard when it comes to academics and private sector pioneers? And I reiterate that his own colleague, Marguerite McDonald, has far fewer sources / citations measuring up to Frantz. Thank you for allowing me the space here (and perhaps to vent). Editora89119 (talk) 14:56, 19 March 2025 (UTC)[reply]
If we were going to change that "double standard", it would likely be in the direction of making none of them qualify for (separate) articles. See Wikipedia:An article about yourself isn't necessarily a good thing for one of the reasons why it might be better to err in the direction of excluding individual people.
BTW, if the AFD closes with deletion, you can request a WP:REFUND to your userspace (or you could just copy/paste your own work to a page named something like User:Editora89119/Frantz now, before the deletion happens). That would make it possible for you to keep working on it, but you should not WP:MOVE it back into the mainspace without jumping through all the hoops. WhatamIdoing (talk) 18:50, 19 March 2025 (UTC)[reply]
Wikipedia:Articles for deletion/Jonathan Frantz is a fun read. I do wonder what happens when the undisclosed use of large language models becomes harder to detect. The tone and em dashes in this case are a pretty clear tell today, but I imagine the models or prompts or user behavior will start to mask what's really happening.
It feels like these services really ought to start charging people to use these tools to keep the spam rate down. It's kind of good that it costs 73 cents to send a piece of postal mail and kind of bad that people can send e-mails for basically free. On the other hand, spammers on Twitter weren't really deterred by paying for the verification badge so I dunno. --MZMcBride (talk) 20:26, 27 March 2025 (UTC)[reply]
From the Wikipedia point of view, I'd argue that a kind of Turing test applies. If the LLM-generated material is genuinely indistinguishable from human input, does it even matter who's behind it provided that in the case of articles the material created is neutral, accurate and reliably sourced (something LLMs are currently decidedly unable to do), and in the case of discussions that the arguments are coherent and genuinely based in policy?
Considering some of the waffle written by genuine humans here, it's hard to say the AI stuff is significantly worse. The usual tells of LLM content (slightly pompous tone mixed with colloquialisms, inconsistent approach to grammar, preference for falsifying sources when faced with apparent inconsistencies, unattributed copying…) could describe about 75% of Wikipedia's internal discussions. ‑ Iridescent 20:57, 28 March 2025 (UTC)[reply]
To be honest both ChatGPT and copyvio are mistakes for which I can't really blame new editors. The learning curve for Wikipedia may not be any steeper than it was 20 years ago, but the slope goes on much higher with 20 years of accumulated policies, guidelines, and unwritten ways-of-doing-things; it's no surprise that people have difficulty understanding which corners are OK to cut. (Neither you, me, nor MZMcBride would say we're familiar with all the things one is supposed to do, and we're all about as insider as one can get.)
Couple that with the fact that the older generations have now had 25 years to get used to Wikipedia, and thus a significant proportion of new editors are going to come from a generation who grew up in the culture of social media where copy-pasting is seen as routine and uncontroversial.* It's a miracle we have any competent new editors who aren't either plagiarising, using AI to "improve" writing, or sockpuppets of people who learned Wikipedia rules back in the days when you could still pick them up gradually as you went along without people shouting at you when you got them wrong. ‑ Iridescent 06:47, 30 March 2025 (UTC)[reply]
Before anyone says it, yes I know the culture of copying texts has existed since the clay tablet era, but Web 2.0 transformed it from something people did occasionally for something they found particularly significant, to something done daily and routinely.
do you think in the near future, after the AI boffins get around to fixing the hallucinations and crappiness, that Wikipedia will have a local article creation LLM where editors essentially fill out a form, link all the sources they've found, press generate and WikAI spits out a perfectly formulated article, templates, embeds, infoboxes and all? I started on this site nearly a decade ago to improve my writing, but a combination of creative block and a lack of expertise with the technical aspects made me abandon article creation by and large for counter-vandalism and now copyvio patrolling. It's been discussed at length how all the sexy articles have been written and much of article creation going forward will be writing 3-5 paragraph pages on long dead BLPs who happen to meet GNG. Thanks, L3X1 ◊distænt write◊ 15:58, 30 March 2025 (UTC)[reply]
They've been working on a hand-coded, multilingual approach to Wikipedia. Imagine if you could code a template – maybe something like "{SUBJECT} {#be:singular|third person|tense} {#wikidata:p106|gender}." – and have it spit out "Isadora Duncan was a dancer" or "Sarah Lamb is a ballerina", or the equivalent in the local language, any time someone put your template into an article. It probably wouldn't produce brilliant prose, but it would give total control to editors, while working at scale. Firstly, there's no room for hallucinations, and secondly, individual editors decide whether or not to use each templated function. WhatamIdoing (talk) 16:14, 30 March 2025 (UTC)[reply]
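The templated-sentence idea above could be sketched roughly as follows. This is purely illustrative: the `render` function, the `facts` dictionary, and the `{subject} {be} a {occupation}` syntax are invented for the example and are not the actual Wikifunctions or MediaWiki template syntax.

```python
# Hypothetical sketch of template-driven sentence generation from structured
# data. All names and syntax here are illustrative, not real Wikidata calls.

def render(template: str, facts: dict) -> str:
    """Fill a sentence template from structured facts, choosing the verb
    tense from whether the subject is living."""
    verb = "is" if facts.get("living") else "was"
    return template.format(subject=facts["label"], be=verb,
                           occupation=facts["occupation"])

duncan = {"label": "Isadora Duncan", "living": False, "occupation": "dancer"}
lamb = {"label": "Sarah Lamb", "living": True, "occupation": "ballerina"}

template = "{subject} {be} a {occupation}."
print(render(template, duncan))  # Isadora Duncan was a dancer.
print(render(template, lamb))    # Sarah Lamb is a ballerina.
```

Because the facts are structured rather than free text, there is nothing for the generator to hallucinate: it can only emit what an editor put into the data, which is the "total control to editors" point above.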
They've been working on a way to auto-convert Wikidata into text for what feels like longer than Wikidata itself has existed (remember Reasonator?). Way back when, Haitian Wikipedia had the dubious privilege of being the testbed for the software to auto-create a pseudoarticle based on Wikidata if you searched for a topic that didn't have its own article. (I assume Haitian was chosen as it's a language with so few users that it wouldn't hugely annoy millions of people, but which is close enough to French that large numbers of people could tell at a glance if it were spewing gibberish.) On a quick dip of topics on which they don't have articles, it looks like they now just serve up the Wikidata entry rather than trying to parse it into text.
What you're talking about—making each sentence into a collection of templates which can autoconvert between languages—I can't see catching on; writers can grudgingly accept the intrusion of templates into blocks of text when they accept it's necessary for the formatting, but only crazy people actually write in wiki markup. This sounds like the WMF—not for the first time—has made the mistake of thinking that "the way professional programmers think, act, and communicate" is the same as "the way the other 99.99% of the world thinks, acts and communicates". (Aside from anything else, such a system would presumably finally slam the lid on the coffin of their beloved Visual Editor, and I can't see the WMF signing off on that.) ‑ Iridescent 20:09, 30 March 2025 (UTC)[reply]
mw:Extension:ArticlePlaceholder is on several wikis. As an occasional contributor to the Haitian Creole Wikipedia, I'm glad that it's there.
Also, m:Wikifunctions wasn't the WMF's idea. It was formulated and promoted by a long-time editor (since ~2003). I'm surprised that the WMF took it on at all, and I'm doubly surprised that they didn't hand it to WMDE, but I think that it will be popular with the smaller wikis. A locally written article would usually be best, but if you're not sure what to write, or you're trying to create a basic set of articles (htwiki is mostly one-sentence boilerplate articles about US census locations; Reasonator would be an improvement), or if you don't want to mess with updates (e.g., new census reports), then I can see editors choosing to drop a pre-written {#basic intro} or a {#list of works} or a {#demographics} template into an article rather than starting completely from scratch. WhatamIdoing (talk) 21:35, 30 March 2025 (UTC)[reply]
I'm not insulting the principle of "some kind of summary is better than nothing" when it comes to the smaller Wikipedias. I completely agree that provided the Wikidata entry is correct, it's better for the reader to at least have some kind of "this should give you some idea of what we're talking about" summary than to be faced with the generic "This page does not exist. You can click here to create the page directly, or you may create a draft and submit it for review." when clicking on a redlink.
What I don't believe is that the WMF devs have the necessary skillset to do true parsing of data into coherent text across multiple languages, when even billion-dollar tech corporations struggle with it. Even something basic like "generate a plain text version of the infobox" is fraught with difficulties given how easy it is to miss nuances when translating. Particularly if the US government carries out its threat to revoke §230 (I suspect Musk will talk Trump out of it as the liability to Twitter would be huge, but the intention is there), I can't imagine the WMF's lawyers would be particularly keen on this. If we're machine-translating between 342 languages, all it takes is a single mistranslation leading to (e.g.) someone getting lynched / a product being boycotted / a health scare / etc, and the reputational damage and financial liability are potentially huge. (Paging Yngvadottir if she's still around, as she can generally make this argument more eloquently than I.) A trivial example off the top of my head of how machine translation between wikis could go seriously wrong would be Ce produit contient des traces de poison—a human editor will (hopefully!) see this as "this product contains a trace quantity of fish" with a typo, but machine-translation will quite happily translate it as "this product is toxic". ‑ Iridescent 06:14, 31 March 2025 (UTC)[reply]
This is a trivial nitpick (but hey, this is Wikipedia!), but in French, poison means poison, while the word for fish is poisson. --Tryptofish (talk) 20:51, 31 March 2025 (UTC)[reply]
Ooooh lovely, this talk page is once more hosting wide-ranging and interesting conversations, and Iridescent is back! Maybe that should be the other way round. ... My hot take is that, first of all, this is another condescension to "small wikis" couched as benevolence. It should be left up to the editing/language community of each Wikipedia how they want to go about expanding their coverage, including whether they want to largely delegate it to bot or human translation, or to auto-generate entries in some other way, what their priorities are for new articles, or even how they would like articles to be formatted. It's not the WMF's, or en.wiki's, concern if their versions of Wikipedia are small, or "unbalanced" in their coverage, use or don't use infoboxes, or have more or fewer pics. As I recently said on the unnameable site, "Either let them have their own Wikipedias, or don't." Secondly, that this is a separate pickle from the WMF's promotion of machine translation, and by "pickle" I mean that it's also going to produce garbage, but in this case through GIGO. Wikidata entries are usually not entirely correct: they are often either duplicates or conflations, they enshrine information drawn directly from Wikipedia (including things that are correct on one version of Wikipedia and incorrect on others; it's soul-destroying to keep trying to fix something like a date of birth, especially since the interface is alien and quite hard, and adding a reference is so hard I've never figured it out—and in any case Wikidata has almost zero vandalism patrolling and no edit summaries, so it will just be switched back). 
Wikidata doesn't handle well where a topic is an article on one Wikipedia but a section or a redirect on another Wikipedia (and I don't see how it can, especially since it's sometimes a matter of different judgements of importance in different cultural contexts, and sometimes purely accident or expediency, like the fact we cover Eichler Homes as background in Joseph Eichler rather than the other way round). Plus, the Babel issue particularly affects Wikidata: the site is more monolingual even than en.wiki, so both the accuracy and the sorting of foreign-language data are rock-bottom. (Where there are foreign-language descriptions of "items", they're often error-filled; and there used to be a tremendous knot over linking the Midsummer Night's Dream articles to Japanese, since the Japanese translation of the phrase is the title of a film that was at that title on ja.wiki—I hope someone has now sorted out that mess, but I'm sure there are other such messes.) Yngvadottir (talk) 22:54, 31 March 2025 (UTC)[reply]
I don't know how to add page numbers yet, but since most of my refs involve PubMed, it's not been a question I've personally needed to address. That might be of more importance for you than it has been for me. WhatamIdoing (talk) 23:33, 1 April 2025 (UTC)[reply]
Thanks. That will enable me to add books with ISBNs. But I see that I would have to use "import URL" to add a reference to, for example, a news article or obituary. (The case in which I've most wanted to fix this circularity is dates of birth.) I know I can't manage that. (And of course not all books have ISBNs; does it by any chance accept OCLC numbers?) Yngvadottir (talk) 02:05, 2 April 2025 (UTC)[reply]
I don't know what "import URL" does, so I suggest using "reference URL". You can manage it. You just paste the URL into the little box. It seems to take any URL, so you could also use, e.g., a link to an exact page in Google Books.
I'm not sure I agree on another condescension to "small wikis" couched as benevolence. Presumably if the Haitians (or whoever) don't want the auto-generated placeholder 'articles' they could choose to turn them off. My point isn't an issue with the existence of the placeholders; it's that their general unreadability illustrates just how far off MediaWiki developers are from being able to create readable text on the fly.
With the disclaimer that I haven't been following Wikidata particularly closely, my experience of it tends to align more with Yngvadottir than with WAID. A level of inaccuracy which we'd never trust on Wikipedia itself, a toxic and dysfunctional internal culture, a very unfriendly editing interface, and a lack of clear processes for checking and correction combine, in my opinion, to create something we shouldn't be using. (The relevant thought experiment might be, "If Wikidata didn't exist but there was an identical-in-every-way site run by Amazon or Google, would we trust it?". Given the history at RSN of other user-generated data sites like IMDB, the answer would almost certainly be no.) Given that we (rightly) don't trust other language Wikipedias as sources without independently verifying every source for every claim, I don't understand the attitude that we should grant some kind of exception for Wikidata.
Yes, I'm aware this is starting to veer some distance from the original point, which, if I'm understanding it, seems to be more about creating coded fragments to make machine translation easier. I still think that's problematic—even something as basic as "Isadora Duncan was a dancer" needs a source, and the various Wikipedias vary in their attitudes both to which sources are reliable and how citations should be formatted. (Also, I don't think it's too WP:BEANS for me to point out that it would probably usher in a golden age of crosswiki vandalism.) ‑ Iridescent 04:04, 3 April 2025 (UTC)[reply]
Yes, "Isadora Duncan was a dancer" would benefit from a source, and the source could be coded into it.
It's true that at enwiki, we believe we're mostly better off without Wikidata. However, other wikis make the opposite choice, and prefer to rely on it as much as possible.
(I agree with the BEANS concern. One tool Wikidata has for mitigating that is that you can set a bot to prevent or revert individual edits. Imagine if an antivandal bot here could be told to watch a specific infobox parameter in a specific article, and to restore that parameter if anyone removes or changes it.) WhatamIdoing (talk) 20:42, 3 April 2025 (UTC)[reply]
There's no technical reason we can't do that now. The reason we don't use micro-Cluebots set to patrol a particular sentence, paragraph, or parameter and revert any changes made isn't technical, it's social—"Anyone can edit" is so fetishized here. (I can think of very few cases when this would actually apply on the Wikipedias. It works on Wikidata because that's not public-facing in the same way; when it comes to Wikipedia, what information is included is always a value judgement and as such pretty much any piece of information could potentially be removed in good faith from any given article or template.) ‑ Iridescent 04:56, 4 April 2025 (UTC)[reply]
I think it works on Wikidata because it's structured data. A bot can reliably tell whether "occupation = dancer" is in the entry. It can't reliably tell whether "was a dancer" being turned into "danced professionally for three decades" means that the words "was a dancer" need to be crammed back into the article. WhatamIdoing (talk) 05:37, 4 April 2025 (UTC)[reply]
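The structured-versus-prose distinction above is easy to see in a sketch. Everything here is hypothetical (the item/property identifiers, the `PROTECTED` table, and `needs_revert` are invented for illustration; Wikidata's real bot APIs are richer than this):

```python
# Hypothetical sketch: why a bot can reliably guard a structured claim but
# not a prose sentence. Data model and names are illustrative only.
from typing import Optional

# (item, property) -> value the bot should preserve
PROTECTED = {("Q152200", "P106"): "dancer"}

def needs_revert(item: str, prop: str, new_value: Optional[str]) -> bool:
    """A structured claim either matches the protected value or it doesn't;
    there is no ambiguity for the bot to resolve."""
    expected = PROTECTED.get((item, prop))
    return expected is not None and new_value != expected

print(needs_revert("Q152200", "P106", None))      # True: claim was removed
print(needs_revert("Q152200", "P106", "dancer"))  # False: claim intact

# Prose offers no such guarantee: "was a dancer" and "danced professionally
# for three decades" state the same fact, so a naive string check misfires.
print("was a dancer" in "She danced professionally for three decades.")  # False
```

The structured check is a pure equality test, which is why per-claim revert bots are feasible on Wikidata; the prose check would need the bot to understand paraphrase, which it can't do reliably.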
Sure, but once we start working cross-wiki—and even more so, with data to be exported off-wiki—that principle breaks down. "Isadora Duncan was a dancer" works because she's only known for one thing. Substitute Glenda Jackson, Lillie Langtry, Arnold Schwarzenegger, etc and the system starts creaking. Whether Tracy Brabin is treated as the Mayor of West Yorkshire, as the actress who played Tricia Armstrong in Coronation Street, or as Jeremy Corbyn's Shadow DCMS Secretary, will be both context-dependent and dependent on the intended audience. (Looking at Brabin's Wikidata entry, her English, French and Spanish descriptions are "British politician and actress", "actrice britannique" and "política británica" respectively. This isn't some hypothetical issue I've made up.) ‑ Iridescent 06:33, 5 April 2025 (UTC)[reply]
Sure, and that's why you offer building blocks, and then a responsible human figures out which pieces to use for each subject. Some of the generic building blocks will probably get used a lot (e.g., population according to last census). Some of the building blocks probably won't. Some of the building blocks will even be custom-written for a single subject (e.g., the subjects with multiple unrelated reasons for notability). If you think of 'the principle' less as "let's automate everything" and more as "here's some super fancy templates, and you can pick and choose which ones you want to use", the principle doesn't break down at all. WhatamIdoing (talk) 19:12, 5 April 2025 (UTC)[reply]
I can totally see the theory, but I can't see it working in practice. Our ridiculously outdated mix of wikitext and templates is still the least worst option compared to VisualEditor or pure HTML because it's generally both possible to figure out what all the elements do (even our wretched system for handling tables), and to work around those elements of the markup with which one isn't familiar. A new editor confronted with something like "{SUBJECT} {#be:singular|third person|tense} {#wikidata:p106|gender}" when they try to edit the page can't reasonably be expected to know where to start if they want to make a change.
If the WMF—or independent devs—want to go down this route (and don't get me wrong, I can totally see how it would be useful to create readable placeholder articles in other languages), I'd suggest a more fruitful approach would be some mechanism for automagically extracting the salient points as a dataset which can then be presented as a set of bullet points. Such a thing must be technically possible, given that the browser on my phone can already do this for websites if I tap the "Summarise" button. ‑ Iridescent 16:20, 6 April 2025 (UTC)[reply]
It's possible that the project will be overtaken by events in the real world. Who needs a human-curated page at all, if you are willing to trust your phone to give you the right answers? WhatamIdoing (talk) 20:22, 6 April 2025 (UTC)[reply]
I certainly don't trust my phone to give me the right answers, but that's missing the point—I don't trust Wikipedia (or Wikidata, or Commons…) to give me the right answers either. There's a reason we plaster Please be advised that nothing found here has necessarily been reviewed by people with the expertise required to provide you with complete, accurate, or reliable information. That is not to say that you will not find valuable and accurate information in Wikipedia; much of the time you will. However, Wikipedia cannot guarantee the validity of the information found here. across everything we do.
I wouldn't trust the "Summarise this page" button on my phone if I were looking for technical or legal guidance, but I wouldn't trust Wikidata either. I would, however, be willing to believe that "extract the key data from the various language Wikipedia pages on a given topic, present it in a standard format, and flag up those instances where different versions disagree in order that a neutral third party can check" is a task that software can likely already do better than Wikidata's current infinite-number-of-monkeys approach, and will certainly be able to do better in future.
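The "flag up where versions disagree" task described above could be sketched as below. The data is invented for illustration (a real tool would pull each value from the language editions' APIs), and `flag_disagreements` is a hypothetical name:

```python
# Hypothetical sketch of a cross-wiki consistency check: collect one field's
# value from each language edition and flag any disagreement for human review.
from collections import defaultdict

def flag_disagreements(values_by_wiki: dict) -> dict:
    """Group wikis by the value they report. If more than one distinct value
    exists, return the groups so a neutral third party can check; otherwise
    return an empty dict (nothing to review)."""
    grouped = defaultdict(list)
    for wiki, value in values_by_wiki.items():
        grouped[value].append(wiki)
    return dict(grouped) if len(grouped) > 1 else {}

# Invented example: three editions give a birth year, one disagrees.
birth_year = {"enwiki": "1877", "frwiki": "1877", "dewiki": "1878"}
print(flag_disagreements(birth_year))
# {'1877': ['enwiki', 'frwiki'], '1878': ['dewiki']}
```

The point is that the software only has to detect the disagreement, not resolve it; the value judgement stays with a human reviewer.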
I might be being unfair here, but I get the impression that at the moment the attitude both at the WMF and among the editor communities is "how can we keep automation off the projects". To my mind, the questions ought to be "Which areas can we safely automate?", "How do we maintain effective oversight?", and "How do we keep engaged and redeploy those editors who used to manually perform these tasks?". While I do think the AI bubble is likely to burst soon, it doesn't mean the technology is going to go away. ‑ Iridescent 05:14, 14 April 2025 (UTC)[reply]
I must say I have not got the impression that the WMF is trying to keep automation off the projects, and I would be interested to know where your impression is from. As an example, WMF is currently producing AI generated videos from en.wiki's DYKs. CMD (talk) 06:09, 14 April 2025 (UTC)[reply]
That's not "automation" in the same sense—something like those videos is essentially just an extreme version of amending a template that affects the layout of multiple pages, or using a script to perform a bulk reformatting, and I sure as hell hope that a human is checking every one of those videos before they go live otherwise I can think of some ways to perform some spectacular feats of vandalism that are reasonably likely to slip through. The party line is that any AI input is under human control (Wikipedia editors are in control of all machine generated content − they edit, improve, and audit any work done by AI if you want it in {{lang|WMF}} format).[1]
FWIW I actually agree with the sentiment there—LLM technology is currently nowhere near the point where we should be trusting it, and at least when a human editor consistently proves themselves untrustworthy we can kick them out. But, we're rapidly approaching the point when a decent AI will be no less error-prone or biased than a reasonably competent human. Thankfully neither myself nor WAID have anything to do with Wikipedia's governance any more so this is Someone Else's Problem, but IMO the question people ought to be asking is "which aspects of content creation and curation can we trust AI to perform without human oversight?". ‑ Iridescent 18:03, 14 April 2025 (UTC)[reply]
We should be able to trust AI with anything we'd currently trust a bot to do. We can probably (or will soon be able to) trust AI with tasks that we'd usually use WP:AWB for. Repetitive tasks – remember having to change "Queen Elizabeth is" to "Queen Elizabeth was" when she died? – are another possibility.
I think that the more interesting options are human+AI. Imagine a tool that checked whether a cited source actually supported the content, or "WP:BEFORE bot" that could produce a list of probably-reliable real-world sources if you're not sure whether a subject is notable. WhatamIdoing (talk) 19:05, 14 April 2025 (UTC)[reply]
It duplicated discussions which were already taking place at Wikipedia talk:Notability. Notability discussions have always been problematic, since "I find this interesting, therefore it's important" is the closest thing Wikipedia has to Original Sin. (Back before the dawn of time when we were still 'bonus content' on a porn site, these were our first ever articles.) My personal opinion—which will never get any traction—is that GNG passed its sell-by date roughly two decades ago, and "does sufficient material exist in reliable sources to write 500 words on the topic?" would both be more sensible and lead to fewer arguments. ‑ Iridescent 01:30, 23 April 2025 (UTC)[reply]
My internal acronym for unofficially judging viability has always been MINTS, for "Multiple Independent Neutral Third-party Sources". I still think 500 words is a more reasonable limit—I'm not asking that people write 500 words, I'm asking if they could. Those 200-word microstubs should IMO be merged into broader topics or list pages if there's genuinely no possibility they could ever be expanded. It isn't a clear-cut test, since all it takes is for a couple of journalists to pick up on a patently non-noteworthy topic as a quirky piece of filler and that makes it 'notable', but nor is the current unhealthy compromise. Under the existing GNG, I could quite easily turn Pterodactyl Turned Me Gay blue. ‑ Iridescent 19:31, 25 April 2025 (UTC)[reply]
It amounts to a demand that 500 words be written. We have a handful of editors who send articles to AFD in violation of WP:NEXIST. When you spam a couple of good sources into the article, they're usually satisfied. They believe the GNG requires proof in all articles, and that it's not enough that you could cite the sources. Those same editors will start nominating articles that are too short, and only taking the time to write 500 words will actually prove to them that you could. WhatamIdoing (talk) 19:39, 25 April 2025 (UTC)[reply]
Also, I think your acronym should be MITS. Sources are allowed to be biased, and an article explaining why something is a bad idea, although not "neutral", is an excellent source. WhatamIdoing (talk) 19:40, 25 April 2025 (UTC)[reply]
Neutrality is a bit of a grey area. Biased sources are perfectly acceptable as sources—a sizeable chunk of our arts articles wouldn't exist without sources written by fans or critics—but they're not great from a notability point of view. If neutral sources don't at least exist, then "nobody finds this important except for those people who have a vested interest in finding it important" becomes a valid argument. The whole deletion area is blurry and inconsistent. ‑ Iridescent04:13, 26 April 2025 (UTC)[reply]
"Vested interest" is the independent/third-party part. A "neutral" source is the one that says both sides make good points, but ultimately one can't possibly be expected to choose between, say, education and ignorance, because they both have advantages and disadvantages. WhatamIdoing (talk) 06:00, 26 April 2025 (UTC)[reply]
We mean different things by 'neutral', I think. When it comes to the arts—particularly when it comes to art history—people understandably aren't likely to devote their lives to studying something they consider low quality. The same doesn't hold true to the same extent in the sciences (either natural sciences or social sciences)—a historian who thinks communism is pure crankery will still be able to give a history of Marx, a virologist can explain miasma theory, but the most eminent gallery curator would be unlikely to offer much of a discussion of Vladimir Tretchikoff. ‑ Iridescent16:54, 24 August 2025 (UTC)[reply]
Break: autogenerated summaries
Iri, it's not about us. If most readers trust the summary on their phones, Wikipedia will die. I don't mean monetarily (though that is probably already happening, and probably has more to do with TikTok than with AI); I mean that the community will collapse. We need readers, because a teeny tiny fraction of readers become editors, and a small fraction of editors become part of the community. WhatamIdoing (talk) 18:01, 14 April 2025 (UTC)[reply]
Those AI generated summaries may be new, but people have always used their equivalents since the invention of writing let alone the invention of Wikipedia—this is why religions have catechisms rather than expecting everyone to read their holy book cover-to-cover, and why Cliff's Notes sell upwards of a million copies per year. I'm sure you're aware of (I think it may even have been you who told me about) the WMF's pageview analysis that showed just how few readers look at anything other than the lead paragraph and infobox. And that's fine—the vast majority of readers just want to know who played Corporal Newkirk in Hogan's Heroes, they aren't interested in his entire biography. Those people skimming summaries are going to come back later to learn more if they find it interesting, and if they don't find it interesting they wouldn't have read past the lead anyway.
The people who go on to become Wikipedia editors are drawn from the people who read the lead, think "that sounds interesting" and go on to read the entire article, and then either think "that article needs improvement" or "why isn't the article on some other topic of this quality?". A machine-generated article summary and a machine-extracted data summary are just 2025 equivalents of that lead paragraph and infobox respectively. They have the potential to be helpful to some people and shouldn't have any significant impact on either the readers at whom articles are primarily aimed, or on editor recruitment. As you know I'm highly AI-sceptic and even more so Wikidata-sceptic, but I don't see why we wouldn't welcome something that consistently and repeatedly hammers home the message of "it might seem counter-intuitive but Wikipedia is generally the place to trust as a source of information". ‑ Iridescent18:28, 14 April 2025 (UTC)[reply]
If the WMF could coax Google, Meta et al to credit prominently the sources for those summaries—as Google already does with their data boxes—then yes, it would. I think it's fair to say that 99% of the world don't understand that when it comes to searches, "artificial intelligence" is largely just an automated process for paraphrasing Wikipedia. If people saw "Source: Wikipedia" and "Source: Wikidata" about 75% of the time they used any kind of AI query, it would be a powerful rebuttal to the whole "Wikipedia is irrelevant now that we can ask AI" mentality. ‑ Iridescent01:39, 23 April 2025 (UTC)[reply]
NYB concern about AI
Based on things I've seen recently, I have a completely separate concern about AI and Wikipedia, which I've raised on my talkpage. I'd appreciate any comments there. Thanks, Newyorkbrad (talk) 15:32, 2 April 2025 (UTC)[reply]
As this reply is somewhat tangential, I'll reply here rather than at User talk:Newyorkbrad#Question for discussion, so any back-and-forth doesn't divert your thread down a sidetrack. Feel free to copy it across to your talk if you want all the discussion in one place.
IMO this is one of those rare occasions where you'd actually be better off having the discussion somewhere like Wikipediocracy, rather than on-wiki. For long boring reasons Wikipedia has always had a disproportionate number of "better data processing is the solution to everything" True Believers, and in my experience they tend to shout the loudest. As such, discussions held on Wikipedia about how we handle data tend both to get very heated and to give a distorted picture of consensus, as those sceptical of Big Data quite understandably feel they have better things to do than be ranted at. (Your memories are no doubt just as fond as mine of people ranting about how if we didn't follow their preferred formula on infoboxes, categories, date formatting etc it would RENDER WIKIPEDIA UNUSABLE!!!!!!) For a discussion like this, the input of people like Somey, who understand Wikipedia and the broader internet/data/AI ecosystem well enough that their opinions are worth hearing, but who aren't invested in pushing Wikipedia down one path or the other and aren't worried about whether they upset people on-wiki who are strongly pro- or anti-AI, is probably more valuable than the opinions of the people who spend all their spare time on-wiki.
My 2c on the issue you raise regarding AI scraping of talk pages would be that it doesn't really matter. These talk pages are all publicly published so there's no privacy or copyright issue—{{NOINDEX}} is just a courtesy to the rest of the internet to prevent search results getting cluttered with shitty drafts and tedious internal Wikipedia discussions. (The BLP issue I see someone raising is IMO a red herring. Obviously we don't want libel or untruth on talk pages for legal and ethical reasons, but if an AI scrapes something inappropriate and doesn't factcheck it then as far as I'm concerned that's the AI's problem not ours—any AI operator who isn't teaching their AI not to blindly trust Wikipedia isn't doing their job.) I can't see it having any significant impact on AI output—given that LLMs are being trained on 20+ years of social media posts, the distorting effect of any inappropriate talkpage content would be a drop in the ocean. (There's one very specific use case where this would be a significant issue—an AI bot that can mimic a new Wikipedia editor gradually learning, such that one could set a bunch of scripts running and then come back a couple of years later to harvest a crop of fully-grown sock accounts, all well-enough regarded that they could then work in lockstep to overwhelm discussions. I suspect we're still decades away from the point where a LLM could consistently pass the Turing test day-in-day-out in the Wikipedia environment, so I'm not losing sleep over it.) ‑ Iridescent04:40, 3 April 2025 (UTC)[reply]
I sometimes used AI to format citations because doing so by hand is just painful (both mentally and physically). I also wanted to see whether ChatGPT could propose a rewrite of (parts of) Lake Tauca, which might make a good FAC candidate but would need a serious rewrite. Jo-Jo Eumerus (talk) 08:53, 3 April 2025 (UTC)[reply]
Unless and until ChatGPT can understand how to cite sources, how to select which sources are appropriate to cite, and (most importantly from our point of view) when to recognize a situation where we need to omit something because the only sources that mention it are low-quality, it's not going to be particularly useful on Wikipedia except in limited "rewrite this paragraph to sound less verbose while still meaning the same thing" situations. For something like Lake Tauca, where presumably a significant proportion of the sources are going to be in Spanish, my confidence in AI drops to something approaching zero—even bilingual humans struggle with multilingual sourcing and weighing not only the relative reliability of sources in different languages but the fact that different languages represent different cultures with potentially different ideas of which elements are significant. (I'll fall back on my go-to example of en:Texas Revolution, es:Independencia de Texas and ca:Guerra de la independència de Texas, all of which are written by highly-regarded editors on their respective wikis and which are two FAs and a FFA respectively, but which are so different in terms of which elements they give significance to they could almost be describing different incidents.)
Strangely in light of certain other unpleasantnesses, the only LLM in current widespread use that seems to grasp the concept that "reliable source", "unreliable source" and "potentially unreliable source" are three different things is Elon Musk's pet Grok system, which for other reasons one would hope are obvious I wouldn't trust within a mile of Wikipedia. ‑ Iridescent16:05, 3 April 2025 (UTC)[reply]
Sorry, I meant to rewrite the text, not the sourcing and stuff too. Keeping text-source integrity would have been my job. FWIW, on this topic most sources are in English, probably because it's a bit too remote from present-day concerns to draw attention from local scientists. Jo-Jo Eumerus (talk) 07:25, 5 April 2025 (UTC)[reply]
If it's being used purely for proofreading and formatting I don't see how it's ethically or technically any different to a spellcheck. The issue is any situation where it's involved either with sourcing or the interpretation of sources—we know from experience that where there's incomplete information ChatGPT et al have a tendency to fabricate to fill in gaps rather than say "I don't know". (And don't get me started on machine translation. An eye-opening exercise is to take a piece of text, get your machine translation of choice to translate it and then re-translate it back to the original language. I just tried it with the lead paragraph of Lake Tauca and The lake was saline. The lake received water from Lake Titicaca, but whether this contributed most of Tauca's water or only a small amount is controversial; the quantity was sufficient to influence the local climate and depress the underlying terrain with its weight. became The lake water is salty. The lake is fed by Lake Titicaca, but there is controversy as to whether it provides most of Taukei's water or only a small amount. It is large enough to influence the local climate and reduce the weight of the soil beneath the ground., and Google Translate generally works best with this kind of topic where there aren't as many nuances and abstract concepts to grapple with.) ‑ Iridescent18:08, 6 April 2025 (UTC)[reply]
Another thing that AI does well is to find sources discussing specific claims - Perplexity AI in particular - although you can't always trust their interpretation of the source. I've seen suggestions that the present generation of AI tools is suited mainly to save certain boring manual tasks, less so at replacing analysis and more cognitively engaging ones. JoJo Eumerus mobile (main talk) 06:03, 28 April 2025 (UTC)[reply]
There was a very good paper by Apple (which I can't find at the moment but somebody will probably dig out the link) justifying their not jumping onto the AI bandwagon. The TL;DR of it is "it's great for simple tasks but anything complicated needs so much checking it will never be labor-saving". ‑ Iridescent16:43, 24 August 2025 (UTC)[reply]
Sounds like "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity" [2]. Perryprog (talk) 16:48, 24 August 2025 (UTC)[reply]
That's exactly the one. My feeling towards AI is that it's a 200-years-later version of railroad mania. There's a genuinely world-changing technology at the core of it, but for every empire built there will be a hundred spectacular failures that wipe out their investors, and even the successes will have spectacular failures along the way. ‑ Iridescent17:01, 24 August 2025 (UTC)[reply]
Granted, I wonder whether humans actually do a much better job at this kind of thinking. I mean, we certainly seem to back off faster and say "sorry, but I dunno", but that sounds like something completely different. One thing that I have been wondering about is how well AI works when comparing a piece of article text to its source. Jo-Jo Eumerus (talk) 09:01, 28 August 2025 (UTC)[reply]
(I admit that part of the reason I am posting about this is because I haven't been able to find anyone interested in Misti, which needs some prep work before a second FAC attempt. Unfortunately @SandyGeorgia and Femke: are busy) Jo-Jo Eumerus (talk) 10:56, 28 August 2025 (UTC)[reply]
There's two things here. First, many LLMs are often prompted to come across as helpful, and saying 'no idea' isn't helpful. Furthermore, I doubt it understands that it doesn't know on many occasions. In its essence, an LLM is a statistical word prediction tool. It just happens to be the case that there is a lot of knowledge contained in language, so often it does give you answers that work well enough. Google's search AI slop does say it doesn't know frequently, which is an improvement, but still it hallucinates all the time when it does give an answer to your query, and is much less accurate than the big models. Humans are better at knowing their limits, even though we all know editors on Wikipedia who overestimate their own knowledge with a fierce insistency. I would think LLMs would be quite bad at checking text-source integrity issues, as they fill in gaps in the text with their own 'knowledge', not being able to distinguish between paraphrasing and bringing in new pieces of information. When I struggle with paraphrasing some complicated sentence on Wikipedia, I often ask chatGPT for help and can only use like 20% of its suggestions. Glad to see you back, Iridescent!—Femke 🐦 (talk) 11:21, 28 August 2025 (UTC)[reply]
I wonder if that ("fill in gaps in the text with own 'knowledge'") might be a question of how the prompt is worded, rather than a general issue with how LLM/AI works. Then again, human source-text reviews don't scale (I am sure most claims in a FAC aren't checked against their source because there are just too many of them) and their error rate increases sharply with scale (my experience when doing just that), so I guess it'd be a question of lesser unreliability. Jo-Jo Eumerus (talk) 08:08, 29 August 2025 (UTC)[reply]
In my admittedly limited experience, the big LLMs are very consistent (currently) in the manner in which they fill in gaps with their own hallucinations. For Wikipedia's purposes—and probably for real-life purposes—an AI that prominently highlights low-confidence results would be far more useful than the current output of ChatGPT et al, which just gives a big glop of output and a generic "this might not be accurate" disclaimer. Ultimately, LLM AI is just an extreme form of autocomplete. (Yes, people also make mistakes, but when a person consistently makes the same mistake we can talk to them and in extremis kick them out.)
AI is supremely useful for specific defined tasks, but I still wouldn't trust it for anything remotely contentious. Even the more sophisticated models that can understand the concept of "there are differing opinions" aren't at the stage—and may never be at the stage—when they can consistently tell the difference between a crank opinion and a fringe-but-still-respectable position. (I suspect this will get worse, not better; the AI models are learning the preferences of their individual users and are already starting to act as bias confirmation mechanisms for individual users.) ‑ Iridescent17:16, 2 September 2025 (UTC)[reply]
Hello
Just dropping by, as I did just now see your reply about the stamp (this is now safely in your talk page archives). I am on one of my periodic visits to see how much has changed. Some familiar names still here. Some sadly departed (as in mortal coil shuffled off). I am trying not to get sucked into the admin activity discussions (but failing). I do see that the edit histories now say when people are doing mobile edits. I am sure that is new, but will doubtless be told that was at least a year ago (or more). I confess I looked for an emoji to add at this point. Oops. My old-timer badge beeped at me. I will check back for any replies in a month! Carcharoth (talk) 18:28, 3 June 2025 (UTC)[reply]
I'm vaguely around, but not particularly active (as you can presumably see). I intend to come back at some point, but when and how (or even if) I can't say. ‑ Iridescent16:41, 24 August 2025 (UTC)[reply]
Thank you today for the 2015 The Combat: Woman Pleading for the Vanquished, "about a picture of a right foot; The Parthenon magazine was a great admirer of said foot, saying it "seemed to glow with the rich juice of life", but The London Magazine disliked the foot and felt it did not have sufficient heroic character. The foot in question is attached to The Combat, a very large painting of highly questionable taste, which in the mid-19th century was considered by some critics as among the greatest artworks of all time, but which has gradually faded into obscurity."! - I have - sad record - three articles of people who recently died on the same page. Good to see you active here! -- (forgot to sign)
If you're still doing this, I'd echo what Risker says about GorillaWarfare generally being a good one to ask. If you don't mind holding your nose, pinging some of the less crazy people at Wikipediocracy would probably be worthwhile, as they no doubt took copious notes as it unfolded. I managed to miss pretty much the whole thing both on and off wiki. ‑ Iridescent16:40, 24 August 2025 (UTC)[reply]
Wikipedia is not a command hierarchy
Hello, Iridescent and friendly talk page stalkers. Imagine that I want to add a link to Wikipedia:What Wikipedia is not that communicates the concept that you don't get to tell other volunteers what they have to do, while you complain that they aren't doing what you've ordered. (I mean, we had an RFC that formed a consensus that somebody else would do all of this work. Why didn't they do it already?)
How would you phrase that? "Wikipedia is not a place where you get to vote that others do the work"? Surely there are better options. WhatamIdoing (talk) 20:23, 17 July 2025 (UTC)[reply]
Cryptic, I'm looking for something closer to "Don't complain that other people didn't do what you could do, except you refuse to do it because you don't want to do it yourself". WP:SOLVE advocates for tagging pages when you need help, not when you think other people should do the boring work. WhatamIdoing (talk) 03:37, 18 July 2025 (UTC)[reply]
This is covered in WP:BOLD – Fix it yourself instead of just talking about it. In the time it takes to write about the problem, you could instead improve the encyclopedia. – but I agree that it doesn't really fit there. I assume it's there for historical reasons as the least inappropriate place back when we only had a handful of policies and guidelines. (I personally am not convinced WP:BOLD is really appropriate as any kind of Wikipedia guideline any more. "We welcome anyone who tries to help, whether or not they're actually being helpful" may have described Wikipedia circa 2003; it certainly doesn't describe any of the WMF projects—except maybe the ultraniche ones who are grateful for any participant at all—in 2025.) ‑ Iridescent16:26, 24 August 2025 (UTC)[reply]
I've heard that back in the day, arwiki chased after vandals with some success, on the theory that they desperately needed contributors, and if you have already figured out how to vandalize an article, then hey – you're already halfway to writing decent content!
And who should do the boring work? I sometimes wonder if the Wikipedia back office is populated by too many wannabe chiefs and not enough workers (to PC-paraphrase a well-known expression). Not to mention the people who have disdain for some of the more necessary but arduous and soul-destroying tasks (new page patrol, for example), where the dedicated people in the trenches of NPP's war of attrition are sometimes told by others they're wasting their time, while others do their best to throw a wrench in the works. Anyway, after having tried everything from ACTRIAL to creating a user right, to a complete rewrite of the Page Curation code, nothing will change for them until the WMF's 'Growth Team' grows to understand what it means to be a Wikipedia editor and a member of the several forces that clean up and/or delete the trash; and until new users understand that Wikipedia is not the place to park a paragraph of junk and expect others to turn it into a respectable article.
And on that, the WMF needs to be told by a higher authority what is needed, how to appoint the most appropriate CEOs and CTOs, how to recognise and delegate development, and how to balance the books. So looking at the lineup for this year's scramble (did I say 'scramble'?) for the two community seats in the BoT election, while the contenders all mean well, apart from a couple it's more like a modern quest for takers for Arbcom. I'm sure though that you will all turn out to vote, so if you do, here's my take on it, and I make no apology for canvassing. Kudpung กุดผึ้ง
I generally ignore the BoT elections. WAID will probably pop up to disagree, but the general impression I get is that the WMF will do whatever the hell they like regardless, and participation in elections just gives the pretext to say "well, this is what you voted for" when they do whatever they were going to do anyway. Besides, I'm uncomfortable with just how unhealthy an immersion in the more cult-like aspects of the project elected arbs are expected to have; I can't begin to think how much worse it is for elected trustees. This really is the sort of job where I'd consider being qualified for the role to be inherently disqualifying. ‑ Iridescent16:37, 24 August 2025 (UTC)[reply]
To clarify, having just re-read what I've typed: by This really is the sort of job where I'd consider being qualified for the role to be inherently disqualifying I'm not saying either that we should abolish the trustees and appoint Jimmy Wales as Supreme Commander for Life, or convert the WMF into some kind of workers commune with direct voting on every decision. I'm saying that anyone who gets into a position where they're a viable candidate to be an elected trustee has done so much internal politicking that they pretty much by definition no longer represent the broad editor base—I absolutely guarantee that not one editor in a thousand could even tell you who the current ones are. (I know it's a cliche but it remains true—out of all the functions WMF lists on its website, the only one 90% of editors and 99.9% of readers care about is We maintain the servers, build the software, and design the technology that keep these projects running.)
If I were designing the WMF's governance, I'd get rid of the community elected trustees altogether and have a board made of the Great and Good from big tech, the NGO/charitable sectors, and former senior civil servants and political figures. In place of the token elected trustees, I'd have an ironclad recall mechanism in which a suitable quorum of participants from multiple projects could force a non-negotiable removal vote amongst the members should one of those trustees turn out to be an asshole. (I'm sure you well know the past instances in which I'd have expected this process to be used.) ‑ Iridescent17:38, 24 August 2025 (UTC)[reply]
I've got my own views on what makes someone qualified for the WMF's Board, and it sounds like ours are compatible: People are qualified if they know how a corporation's board works. No matter how much social capital you have in some of the online communities, and no matter how many edits you've made, if you don't really know how a board works, then you aren't IMO qualified.
In terms of non-profit board structure, the usual rule of thumb, particularly for service organizations (e.g., a food bank) is that one third of the members should be wealthy donors, one third should have experience with the subject (e.g., people who received services from this or a similar organization in the past), and the remaining third should actually know what they're doing (e.g., experienced business managers). I therefore think that removing the community-focused seats would be considered, within the non-profit world, a departure from the tried-and-trusted format.
I would not agree that the WMF's Board will do whatever it wants regardless. They are constrained by practicalities on one side and by an explicitly imposed fiduciary duty to serve the WMF's charitable purpose (NB: not 'the community' or 'the volunteers') on the other. But within those constraints, doing whatever they think is the best way to promote their charitable purpose is what they're supposed to do. WhatamIdoing (talk) 18:45, 24 August 2025 (UTC)[reply]
Trying to get things done for the editing Community is bloody hard work (been there, done that). There needs to be a solid bridge between the Community and the WMF and the BoT ain't it. The first priority is to ban current and former WMF employees from becoming members of it, and disallow current employees and contractors from voting on the elections. Kudpung กุดผึ้ง (talk) 18:55, 24 August 2025 (UTC)[reply]
In my experience, the few current and former WMF staff who have volunteered to stand for election have done so out of a desire to blow the whole thing up. You might be excluding the people whose goals most closely align with your own. WhatamIdoing (talk) 19:02, 24 August 2025 (UTC)[reply]
On reading User:WhatamIdoing/Board candidates I think we agree on the broad point, that competence ought to be more important than either popularity or meeting ideological quotas. That said, the WMF is a genuinely global organisation; a rule of thumb that applies in California doesn't necessarily describe the 95% of the world that isn't the United States. Even in the UK—the country closest to the US both socially and in terms of legal structures—one third of the [board] members should be wealthy donors would at best draw a blank stare. I appreciate that the WMF is ultimately a US body and needs to follow US law, but it's always worth bearing in mind that to most of the world "wealthy donors" doesn't signify "successful", it signifies "corrupt". (I assume the WMF are well aware of this, and that's why they make such a big effort with fundraising campaigns even when they don't actually need the money, to maintain the impression that it's funded by small donations from users.) TL;DR, if you don't really know how a board works, then you aren't IMO qualified makes the whole thing US-centric since the way non-profit boards work in the US is very different to most other countries; what you actually want to look for is "capacity to learn how a board works".
I'd argue that people who received services from this or a similar organization in the past is meaningless when it comes to the WMF. Probably upwards of 80% of the world's population have used Wikipedia directly at some point (and most of the remainder have used broadly equivalent products like Baidu Baike), and when you factor in indirectly benefiting from somebody else using Wikipedia it must approach 100%. If one tries to narrow it down to "only the most significant users", then in terms of data usage it would probably be neck-and-neck between the big AI firms and Chinese bot farms, and in terms of editing it would be the crank WBE 1–1000 types, and I'm not sure any of the three are particularly well qualified. (Slight caveat that I do think Big AI should probably have some kind of representation when it comes to strategic planning at the WMF, but I see it more as an ambassadorial thing; I certainly don't think it would be a good idea to put Altman and Musk on the board.)
I agree 100% with There needs to be a solid bridge between the Community and the WMF and the BoT ain't it. From experience, on the one occasion when I genuinely needed a formal ruling from the Board, it was literally quicker and easier to privately hassle Jimmy Wales than to go via correct channels, and I'm someone who at least knows what the correct channels are. If we're going to have community representatives, they should be more along the lines of trade union representatives—as you (WAID) say, making them actual board members legally obliges them to take whatever position they best feel reflects empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and disseminate it effectively and globally, regardless of whether supporting that objective goes against the clear wishes of the people who elected them.
(Unpopular Opinion: if we're going to have community representation of whatever kind, the electoral constituencies should be "English Wikipedia", "Commons" and "Everyone else". However much the WMF may protest otherwise, en-wiki and Commons are the driving forces of the whole movement, and changes which affect them have far more impact than changes affecting any other project.)
I don't agree with ban current and former WMF employees from becoming members of it, and disallow current employees and contractors from voting on the elections, although I could get on board with banning current employees. There's a reasonable case to be made that someone who's worked for the WMF is better placed to know what the issues are—as long as they make any potential conflict of interest clear from the start, I don't see any particular issue. By the nature of Wikipedia, anyone who's in a position where they can get elected to a significant office is going to have a history of friends, enemies, pet issues and crank peeves—what matters is whether they can set their prejudices aside and act neutrally. (If I were tasked with finding potential candidates for the job of English Wikipedia's Ambassador to WMF, my first thoughts would be Fram, Risker, Johnbod and NYB—all of them come with huge amounts of baggage but they're all people I'd trust to know when to set the baggage aside.) ‑ Iridescent07:47, 26 August 2025 (UTC)[reply]
The trappings change between countries (the UK doesn't call them "wealthy donors"; it calls them "patrons", and I'd be astonished if the board of the Royal Opera House Covent Garden Foundation didn't have a majority that 'just happened' to be personally wealthy), but the fundamental point that being part of a board means working on a committee stays the same.
What matters most about knowing how a board works is knowing that you have almost zero personal power, and everything is about the group's decision. The main power of being on the board is being able to vote for or against a decision, and the main requirement is that you abide by the group's decision. Board members cannot implement policies, hire or fire people, sign contracts, set budgets, create goals, etc. by themselves; they must convince the rest of the committee to agree with them. And if you hate committee work, especially the logrolling, back-scratching, politicking, human-relationship parts of it, then you really shouldn't join any board. WhatamIdoing (talk) 17:45, 26 August 2025 (UTC)[reply]
Without bothering to look at their donor lists, I suspect something like the Royal Opera House board will be primarily superannuated politicians and former company bosses, rather than individual donors. The UK model is certainly not ideal—at least when it comes to big organisations, it creates a ruling class of braying public school types who dominate charities and quangos—but the non-stop parade of scandals between 1994 and 2022 means the press are hypersensitive to anything that could possibly be considered a conflict of interest. (The wealthy donors are certainly there, but their payback comes in the honours list rather than as a seat on the board.)
I do agree wholeheartedly that the main qualification for a seat on a board—charity or otherwise—is understanding that the job of the board is to work towards whatever the agreed objective is. I also agree that most of the Wikipedia community are singularly unqualified in that respect. As I think I've already said somewhere in the morass above, rather than community-elected trustees I'd much rather have ambassadors who are explicitly there to represent the editor base and the readership and aren't obliged to adhere to WMF objectives if they disagree with them. ‑ Iridescent17:32, 2 September 2025 (UTC)[reply]
For the benefit of the non-British who frequently post on this page I think we should explain that a 'public school' in the UK sense is anything but, and is a very special kind of institution. Not by choice, but I'm a product of one of the oldest schools in the country myself, and while I never became a millionaire patron of the arts or famous for appearances at The Oval or a shareholder in a quango, I possibly benefited from a well-rounded, but sometimes cruel education.
We've had enough of the BoT being the WMF's self-appointed mechanical rubber-stamping machine, and heaven forbid that a present - or former (I might concede that point, it depends who they are) - WMF employee should be part of its rusty constitution. I certainly concur that while most Wikipedians are probably not suitably endowed with the academic or practical experience for a seat on such a unique board as the WMF's trustees, what is absolutely needed are members who are explicitly there to represent the editor base and the readership and aren't obliged to adhere to WMF objectives if they disagree with them.
The problem, however, is in finding ambassadors who can relate not only to the need for financial transparency and necessary fund-raising and raiding, but also to the processes on the factory floor and the morale of the volunteers whose work generates those donations without reward or a luxury lifestyle. It needs an equitable system for (s)electing them - avoiding conflicts of interest - and for encouraging them to throw their hats in the ring. The ruling class here on Wikipedia are unfortunately often the ones who try to accrue social capital and climb its greasy pole by talking a lot, being bossy, and interfering with progress, rather than the editors who identify areas where policies and processes seriously need improvement and simply get on with it, despite resistance from the WMF and from editors who are obsessed with being Wikipedia's Stasi and live in their bubble of cruel authority like prefects in a British public school. Kudpung กุดผึ้ง (talk) 20:45, 2 September 2025 (UTC)[reply]
When you say things like "the BoT being the WMF's self-appointed mechanical rubber-stamping machine", I wonder whether you are conceptualizing the board as being separate from the WMF. At some level, the Board is the WMF, or at least part of it. WhatamIdoing (talk) 23:06, 2 September 2025 (UTC)[reply]
@WhatamIdoing, According to your definition, one could assume you are suggesting that an independent, non-salaried body is not required at all, and that the WMF is perfectly honest, reasonable, and transparent and needs no checks and balances. That said, perhaps you can understand why I do not think it's a good idea to allow present or former WMF staff to be members of the board - or even to vote in elections for it. Perhaps you are not familiar with the way the board functions in reality, or with the challenges the community faces when it needs a reaction from the WMF or its board.
I am on record as having said dozens of times that after all these years there is still no official bridge between the WMF and the volunteer communities, and that the BoT ain't it. The more the WMF grows and increases its staff, the further it distances itself from its major asset, the editing community - the previous WMF administration was a classic example. Things have improved somewhat with the new CEO, who has kept her feet firmly on the ground, but this euphoria might be short lived when she leaves in a few months' time. Kudpung กุดผึ้ง (talk) 23:56, 2 September 2025 (UTC)[reply]
The board is legally required to exist. No board → no corporation.
It doesn't make sense to think of the board as "independent" of the WMF. The board is the WMF. There is no separate entity, "the WMF", that gets to decide what it will or won't do. This is a pretty simple hierarchical concept: The board sets the budget. The board hires (and when they believe it is necessary, fires) the CEO directly. The board decides how many staff they want to have. This is the board's job. This is not done by some separate entity that you've decided to call "the WMF"; this is all done by the board. WhatamIdoing (talk) 00:21, 3 September 2025 (UTC)[reply]
WAID, the way I think of it, it largely comes down to how one measures "success". Experienced editors tend to measure it differently than do WMF paid staff. The latter tend to look for easily quantifiable things, like numbers of new contributors, whereas the former tend to look more at things like quality of content and minimization of disruption. So, the Board is a part of WMF, but experienced editors may want Board members to be concerned with the things that concern us, and not disregard them in favor of staff-preferred statistics. --Tryptofish (talk) 00:04, 3 September 2025 (UTC)[reply]
It's not "staff-preferred" statistics; it's "Board-preferred" statistics.
I agree that experienced editors, including me, favor the "minimization of disruption". But we have other values, too. I am going to die one of these days, and since I can follow a basic logical chain to its unpleasant conclusion – Wikipedia is written by editors; editors eventually die; dead people don't write Wikipedia articles; if we want someone writing articles after the existing editors die, we need new editors – I, too, am independently interested in the numbers of new contributors. I happen to think those numbers look a bit weak at the moment. How much disruption (all newbies are disruptive, including me back in the day) are we willing to tolerate now, if the goal is to have editors here when we're not? WhatamIdoing (talk) 00:27, 3 September 2025 (UTC)[reply]
If it's "Board-preferred statistics", that's the problem, right there. (I'm actually not going to find fault with the desire for new editors.) --Tryptofish (talk) 00:32, 3 September 2025 (UTC)[reply]
I don't expect the Board to prefer the things you're interested in. They've got to look at their duty to the mission, which involves things like noticing that 80% of the people in the world don't speak English, and that therefore the Board should consider not devoting all their/their org's attention to the English Wikipedia. Naturally, as a long-time enwiki editor, I have my own preferences, but I can't say that theirs is unreasonable. WhatamIdoing (talk) 00:57, 3 September 2025 (UTC)[reply]
The Board should explicitly consider devoting a significant amount of its time to the English Wikipedia. It doesn't matter that 80% of the people in the world don't speak English; what matters is that 80% of the money is generated by the work of the en.Wiki volunteers, and if they don't get the support they need, of course their numbers will diminish over time. The BoT is the only body that can make the WMF sit up and listen, instead of us having to go out on the streets and demonstrate like we did over ACTRIAL. ACTRIAL was important because it showed how disagreement between the WMF and the community can occasionally reach proportions requiring the Foundation to bend to the volunteers' consensus for needed organic changes, as they finally did at WP:ACPERM five years later. It shouldn't have to be like that. Kudpung กุดผึ้ง (talk) 01:10, 3 September 2025 (UTC)[reply]
Statements like The BoT is the only body that can make the WMF sit up and listen show me that you're still not grasping the fundamental legal and factual reality that the Board is the WMF. There is no WMF separate from the Board.
The US legal system does not agree with your view that a public charity should put its attention on where the money comes from. The WMF is not a benefit corporation, whose job is to make money while also doing good. Its job is to do good, even if that means focusing on need instead of revenue. WhatamIdoing (talk) 01:35, 3 September 2025 (UTC)[reply]
I want to take a stab at explaining my thinking a bit better than I did above. I gave some examples of things that one might or might not care about, but I meant those only as examples. My more important point is to make a distinction between things that can readily be measured by statistics, and things that cannot, that depend on more in-depth and nuanced examination. It's understandable that the Board, indeed any board, might be attracted to statistical measures. Such measures are easy to present in annual reports and the like. It's easy to argue that something is headed in the "right direction", or that it needs more attention or resources. But I'm arguing that that's deceptive. I have real-life experience as a university faculty member (and universities are nonprofits with goals that somewhat overlap with those of the WMF), and I know from personal experience that some universities have Boards of Trustees (or equivalent bodies) that are focused on statistical measures, and other universities where the boards focus more on the intangibles. (These things are relative, of course, and few if any universities do entirely one or the other.) And I've seen repeatedly that the universities that "treasure what you can measure" tend to become dysfunctional, whereas those that value the intangibles become the academic success stories. I feel very strongly that this is true. So I'm applying that here, as well. Experienced editors learn over time about things that do, or don't, facilitate productive editing, and content that readers will value. That's extremely important knowledge, and it often focuses on intangibles. If the WMF BoT brushes that aside, thinking "we've seen the numbers, and we know better", therein lies the road to perdition. --Tryptofish (talk) 22:40, 3 September 2025 (UTC)[reply]
@Tryptofish, thank you for that. I couldn't have put it better myself. The key is in If the WMF BoT brushes that aside, thinking "we've seen the numbers, and we know better", therein lies the road to perdition, which I have tried to explain to @WhatamIdoing below. Kudpung กุดผึ้ง (talk) 22:50, 3 September 2025 (UTC)[reply]
I think that in some ways, the Board is less metrics-driven than it used to be. Back in the day, the WMF used to have its monthly all-hands "Metrics" meeting. For years, these were public events, and you can still watch the videos. But eventually the opening "metrics" section was removed, and nobody appeared to notice or complain. AFAIK the metrics are still being presented in a small meeting with a handful of managers and technical folks, so somebody's keeping an eye on them (and should: when participation drops precipitously in a particular wiki/language/country, that can signal a serious technical problem), but mostly it's of less importance than it used to be.
"Less" importance, however, does not mean "no" importance.
(Tryptofish, I'm not sure that experienced editors are good at identifying "content that readers will value". I think we are better at identifying content that other experienced editors will value. For example, readers want more pictures, and we try to limit them with rules like WP:GALLERY; readers want quick access to specific details like MPAA movie ratings and which identity groups a BLP belongs to, and we refuse to make those prominent, or even to include them at all.) WhatamIdoing (talk) 16:02, 4 September 2025 (UTC)[reply]
I think it's telling how you make the case about "what readers value". You based it on metrics. You didn't base it on thinking about what makes Wikipedia more respected by the public at large than the average Google hit. --Tryptofish (talk) 18:34, 4 September 2025 (UTC)[reply]
What metric did I give for "what readers value"?
I am responding to your claim that Experienced editors learn over time about...content that readers will value, so of course I'm talking about what readers value. If you'd instead written Experienced editors learn over time what makes Wikipedia seem more respectable to the public, I'd have given you a different answer (because there is solid research on this subject, and it mostly involves the general public not noticing that this is 'the encyclopedia that anyone can edit'). WhatamIdoing (talk) 20:47, 4 September 2025 (UTC)[reply]
That comes from multiple sources, including unsolicited freeform feedback.
A metric would sound like "Wikipedia articles should average two images". What we have is "Readers say they like pictures and wish there were more of them in Wikipedia articles". WhatamIdoing (talk) 21:25, 4 September 2025 (UTC)[reply]
@WhatamIdoing, That's neatly side-stepping the topic of this thread, but FWIW, the WMF is actually doing nothing to increase the number of genuine users or new content in any way that has had a measurable impact, after three years of the Growth Team's concentration on its pet invention and squandering over $1 million on it. They've been handed a solution on a plate by the community, free of charge, that would increase new, genuinely appropriate content, encourage and help new users to create it, and greatly reduce the burden on NPP (which is another process that WAID once claimed to be superfluous) and AfC. However, since the solution is so simple, hardly needs any coding, and is backed by stats already, the WMF refuse to discuss it because it is not 'their' initiative. Kudpung กุดผึ้ง (talk) 00:46, 3 September 2025 (UTC)[reply]
You know those forced-choice survey questions, where you have to choose the answer that's closest even if you don't like any of the options? I suggest one for you:
Why does the Growth team exist?
Because the Board wanted the Growth team to exist, because the Board wants a product team focused on new contributors.
Because the Growth team sprang fully formed from the forehead of the staff and has managed to exist and get funding from the Board for many years, despite the Board's objection to it.
What we need is more direct information aimed that those new editors/page creators before they start to make an article but these suggestions either fall on deaf ears or are neatly sidetracked by the stats-obsessed Foundation who at the end of the day appear to care only for increased figures for creations and new editors and not for quality control. Kudpung กุดผึ้ง (talk) 12:38, 12 May 2013 (UTC) Are you trying to tell us something has changed? Kudpung กุดผึ้ง (talk) 02:09, 3 September 2025 (UTC)[reply]
Apparently it has changed. The Growth Team (v. 2014 – there have been multiple WMF teams re-using various names over the years, and this is one of them) now provides "direct information aimed that those new editors/page creators before they start to" edit via Special:Homepage. WhatamIdoing (talk) 03:04, 3 September 2025 (UTC)[reply]
@WhatamIdoing, It might interest you to know that I am right up to date with the homepage and the mentoring projects. I will not be so indiscreet as to reveal the most recent exchanges of email with the WMF, but FYI, nothing has changed, and it is not likely to for several more years, when they have used up the money on their current projects and think up something to spend the next budget on - and it won't be on any brilliant ideas that come from the community. And that's why they are refusing to consider a perfectly conceived, free solution. They don't want their existence threatened by the better skills that can be found among the volunteers. And to get this Command hierarchy? thread back on track: that's where the BoT should step in. Kudpung กุดผึ้ง (talk) 05:25, 3 September 2025 (UTC)[reply]
The point of this thread is that Wikipedia's volunteer editors should not boss around their fellow volunteer editors. Whether the Board should do what you recommend instead of what others recommend is not relevant. WhatamIdoing (talk) 17:15, 3 September 2025 (UTC)[reply]
You seem to have got the negatives and positives in your claim the wrong way round. Let's correct it: The WMF is a benefit corporation, whose job is to make money while also doing good. Its job is to do good, even if that means focusing on its own needs and salaries instead of increasing new articles and the quality of them and addressing the needs of the people that provide the content. BTW, I'll find that diff, but for the moment I have a full working day ahead of me. Kudpung กุดผึ้ง (talk) 02:21, 3 September 2025 (UTC)[reply]
Quoting from the first sentence of the article at Benefit corporation: "a benefit corporation (or in some states, a public benefit corporation) is a type of for-profit corporate entity."
The Wikimedia Foundation, Inc. is not a for-profit corporation. It is a non-profit corporation. It therefore cannot be a benefit corporation.
I am not amused. You know perfectly well what I am implying, you worked for the WMF long enough to know what happens in reality, and I am one of the people who moved and shook a few things in my time to get them done. Try again. Kudpung กุดผึ้ง (talk) 05:11, 3 September 2025 (UTC)[reply]
Please read it again. I did not 'recommend' anything. What I said cynically paints the picture of reality. No one on this Wikipedia is in any doubt as to my stance vis-à-vis the WMF. The Growth Team, which is now part of a specific group, 'Contributors', has its 'own strategy for the next 5 years' which they summarise as "We are currently working on a metrics strategy that will define a core metric we’ll aim to move with this product strategy, alongside indicator metrics that will ladder up to." (Could someone translate that into English please? Even in its native German, a very pragmatic language, it does not make an iota of sense). By doing what they 'think' Wikipedia wants instead of taking their cues from the people who know, they are effectively hindering progress and ironically causing the greatest damage to the quality of the encyclopedic content that some of us strive to maintain. Kudpung กุดผึ้ง (talk) 22:44, 3 September 2025 (UTC)[reply]
You wrote that you believe the WMF's "job is to do good, even if that means focusing on its own needs and salaries instead of increasing new articles and the quality".
I disagree. I think the WMF's (both the board's and staff's) job is to promote their charitable purpose. That may, in practice, require some amount of attention to the board's and staff's "own needs", but it has much more to do with helping the general public become more educated. WhatamIdoing (talk) 15:51, 4 September 2025 (UTC)[reply]