The idea lab section of the village pump is a place where new ideas or suggestions on general Wikipedia issues can be incubated, for later submission for consensus discussion at Village pump (proposals). Try to be creative and positive when commenting on ideas.
Before commenting, note:

  • This page is not for consensus polling. Stalwart "Oppose" and "Support" comments generally have no place here. Instead, discuss ideas and suggest variations on them.
  • Wondering whether someone already had this idea? Search the archives below, and look through Wikipedia:Perennial proposals.

Discussions are automatically archived after remaining inactive for 10 days.

Add a bot/policy that bans AI edits from non-extended confirmed users

I saw this thread yesterday and I wanted to chime in with an idea I had, but I waited too long to act on it and now it's archived. So I guess I'll have to make a new thread.

It's clear that lots of new editors struggle to produce good content with AI assistance, and something has to be done. WP:G15 is already a good start, but I think the restrictions can be extended further. Extended confirmation on Wikipedia is already somewhat of a benchmark for qualifying editors to edit contentious articles, and I think the same criterion would do well to stop the worst AI slop from infecting mainspace. As for how this would be implemented, I'm not sure: a policy would allow human intervention, but a bot designed like ClueBot NG might automate the process if someone knows how to build one. Koopinator (talk) 10:50, 18 October 2025 (UTC)Reply

I don't see a practical way to enforce that. I also don't think that people's skill level with AI can transfer to an assessment of their skill level on Wikipedia. —TheDJ (talkcontribs) 11:31, 18 October 2025 (UTC)Reply
Regarding enforcement, I would suggest:
1. Looking at whatever process ClueBot uses to detect and evaluate new edits, and adding an "extended confirmed/non-EC" clause.
1.1. I will admit I'm not entirely sure how this would work on a technical level, which is why I posted this idea in the idea lab.
2. Looking at word frequency, as in User:Gnomingstuff/AI experiment, to distinguish AI from non-AI edits. Koopinator (talk) 15:32, 18 October 2025 (UTC)Reply
please don't use this in any kind of blocking enforcement capacity, it is not remotely ready for anything like that Gnomingstuff (talk) 17:41, 20 October 2025 (UTC)Reply
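For concreteness, a minimal sketch (in Python, using the standard MediaWiki Action API) of how the extended-confirmed gate in step 1 might look. The ai_score parameter is a placeholder for some hypothetical word-frequency classifier; per the caution above, nothing like it is reliable enough to act on today.

import requests

API = "https://en.wikipedia.org/w/api.php"

def is_extended_confirmed(username: str) -> bool:
    """Return True if the account holds the 'extendedconfirmed' group."""
    resp = requests.get(API, params={
        "action": "query",
        "list": "users",
        "ususers": username,
        "usprop": "groups",
        "format": "json",
    }).json()
    users = resp.get("query", {}).get("users", [])
    return bool(users) and "extendedconfirmed" in users[0].get("groups", [])

def should_flag_for_review(username: str, ai_score: float, threshold: float = 0.9) -> bool:
    # Only consider non-EC editors, and only when the (hypothetical) scorer is
    # very confident; anything looser would drown patrollers in false positives.
    return not is_extended_confirmed(username) and ai_score >= threshold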
A person's willingness to use AI on Wikipedia is an immediate and absolute WP:NOTHERE, in my opinion. TooManyFingers (talk) 05:50, 4 November 2025 (UTC)Reply
Too sweeping an opinion, in my opinion. First, you would have to be talking specifically about using unsupervised AI to write articles. Secondly, I think it would be "insistence" rather than "willingness". And thirdly, it could well be a WP:CIR or user-education issue rather than a NOTHERE one. All the best: Rich Farmbrough 18:03, 6 November 2025 (UTC).Reply
Do you have any evidence that extended confirmed users create any better edits with AI than users who are not extended confirmed? Phil Bridger (talk) 14:33, 18 October 2025 (UTC)Reply
I would say it's a reasonable inference. Here's what I can say:
  • We can expect that extended-confirmed users are more likely to be familiar with Wikipedia's policies and guidelines, by virtue of having been here longer.
  • Some anecdotal evidence:
    • [1] LLM edit with no sources, survived for almost 2 months. Was created by an editor who was neither confirmed nor extended confirmed.
    • [2] Personal project by yours truly; AI assistance was used, with careful review of the text-source integrity of every sentence as I constructed the page in my sandbox over the course of 59 days before airing it.
  • I admit none of this is hard evidence.
I do feel LLMs have their place on the site (otherwise I wouldn't have used ChatGPT assistance in constructing a page), but if their use is allowed, the barrier for usage really should be raised. Wikipedia's content translation tool is also restricted to extended-confirmed users.
Koopinator (talk) 15:25, 18 October 2025 (UTC)Reply
The issue is raising the bar to prevent bots from editing Wikipedia using LLMs. LDW5432 (talk) 19:57, 27 October 2025 (UTC)Reply
LLM detection for text is very hard and has far, far too many false positives, especially for non-native speakers and certain wavelengths of autism. Aaron Liu (talk) 16:41, 18 October 2025 (UTC)Reply
^ This is my experience. Also, a lot of edits are too brief for the already-dodgy AI "detectors" to evaluate reliably.
@Koopinator, you've made around 2,000 mainspace edits in the last ~2 years. Here's a complete list of all your edits in which the visual editor could detect more than a handful of words being added.[3] It's 78 edits (4% of your edits) – less than one a week on average. And I'd guess that half of your content additions are too short to have any chance of running an anti-AI tool on, so the tool would check your edits two or three times a month. Why build something, if it could only be useful so rarely? WhatamIdoing (talk) 00:58, 19 October 2025 (UTC)Reply
Well, how would that tool's trigger frequency scale across the entire Wikipedia community? I'd imagine it'd be used at least a little more often then (or, I imagine, by multiple orders of magnitude). Koopinator (talk) 05:55, 19 October 2025 (UTC)Reply
For brand-new editors, it might capture something on the order of half of mainspace edits. High-volume editors are much more likely to edit without adding any content, so it'd be much less useful for that group. WhatamIdoing (talk) 19:54, 23 October 2025 (UTC)Reply
We could at least use a flagging system for vandalism review. LDW5432 (talk) 14:05, 6 November 2025 (UTC)Reply
It should be possible to detect low-hanging-fruit AI text, based on certain common features. Raw AI inference cut and pasted from a chatbot is going to be easier to detect. I agree that the type of user doing this probably has no reputation at stake, doesn't care very much, and is more likely to be a newbie and/or a non-native speaker from another wiki. I don't know about policy, but a bot could send a talk page notice, or flag the edit with a "[possible ai]" tag. No one is already working on this? -- GreenC 17:10, 18 October 2025 (UTC)Reply
mw:Edit check/Tone Check uses a Small language model to detect promotionalism. (See tagged edits.) I'd guess that it would be possible to add an AI detector to that, though the volume involved would mean the WMF would need to host their own or pay for a corporate license and address the privacy problems.
mw:Edit check/Paste Check is probably more efficient, though, as anyone copying from a chatbot is going to be pasting it into the article, and detecting a big paste is easier than checking the words that were pasted in. WhatamIdoing (talk) 01:04, 19 October 2025 (UTC)Reply
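A rough sketch of why paste detection is cheaper than content classification: it only needs the shape of the diff, not the words. Assuming access to the before/after text of a revision, a single large contiguous insertion is a plausible proxy for "pasted from a chatbot" (the real Paste Check runs client-side in the editor; this is only illustrative, and the threshold is an arbitrary assumption).

import difflib

def largest_insertion(old_text: str, new_text: str) -> int:
    """Size of the biggest contiguous run of added characters."""
    matcher = difflib.SequenceMatcher(None, old_text, new_text)
    added = [j2 - j1 for tag, i1, i2, j1, j2 in matcher.get_opcodes()
             if tag in ("insert", "replace")]
    return max(added, default=0)

def looks_like_big_paste(old_text: str, new_text: str, threshold: int = 1500) -> bool:
    # 1500 characters is an illustrative cut-off, not a tested value.
    return largest_insertion(old_text, new_text) >= threshold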
I think it should be mandatory for everyone to disclose AI edits, both in articles and on talk pages. There could be a box you check if your content comes from AI or is mostly AI, similar to how you can mark minor edits. Bogazicili (talk) 18:40, 21 October 2025 (UTC)Reply
Having a UI element like that would work towards legitimizing LLM use in creating text for Wikipedia. Merko (talk) 00:41, 22 October 2025 (UTC)Reply
I agree: Either it will allow the material to be posted and thus legitimize LLM use, or it won't allow the material to be posted and cause people to tell lies so they can get it posted. WhatamIdoing (talk) 02:18, 22 October 2025 (UTC)Reply
Do we currently have a policy on LLM usage? This one seems to have failed: Wikipedia:Large language model policy.
My position is that if it's not banned, it should be declared. Bogazicili (talk) 10:45, 23 October 2025 (UTC)Reply
I thought the failed policy proposal was supposed to require people to declare it. WhatamIdoing (talk) 20:00, 23 October 2025 (UTC)Reply
Almost 2 years ago. Merko (talk) 22:09, 23 October 2025 (UTC)Reply
LLM-generated content is a cancer on Wikipedia, and it will only get worse. "AI detectors" have many false positives, as do checks made by editors themselves, but just because we can't reliably detect something today doesn't mean we shouldn't implement a policy against it. I support mandating the disclosure of LLM-generated contributions by all users. We don't treat WP:GNG differently for articles created by extended-confirmed users versus others; we shouldn't do it here either. Merko (talk) 22:21, 21 October 2025 (UTC)Reply
If you think original content generated by a program is a negative to that extent, then I don't think requiring disclosure is the appropriate approach, since that would only be a prelude to removal. We should skip straight to requiring editors not to use programs to generate original content. isaacl (talk) 04:38, 22 October 2025 (UTC)Reply
Wikipedia should first address LLM content from anonymous IPs. LDW5432 (talk) 19:56, 27 October 2025 (UTC)Reply
IP editing actually isn't that much of a problem here -- in my experience almost all AI text I find came from someone with a registered account. Off the top of my head I'd say less than 10% of it comes from IPs.
This may change with temporary accounts in a few days though, who knows. Gnomingstuff (talk) 20:56, 30 October 2025 (UTC)Reply
I came here to propose pretty much the same thing (policy, not bot). Having a blanket rule would be hugely helpful in dealing with editors, since it can get very tedious explaining why each AI edit they claim to have checked is in fact problematic. I might even go so far as to propose a separate user right (or pseudo-right?) called something like LLM user, for editors who can demonstrate they are sufficiently competent with content policies and have a legitimate use case. I don't think such a right should convey any actual abilities, but users found to be using LLMs without it could then be much more easily censured and guided towards other forms of editing. Applying exactly the same system but tying it to extended confirmation seems like it minimizes potential rule creep, but it's a blunter filter which might not be as effective, since I'm sure there are plenty of extended confirmed users who lack the requisite understanding of policy. lp0 on fire () 21:03, 10 November 2025 (UTC)Reply
That is probably a good idea, but I don't see any way to enforce it automatically and also do it well, as it would not be good if someone got flagged for using AI when they did not, and Wikipedia is so large it would happen a lot. I believe that AI should be used extremely rarely on Wikipedia, as it is known to hallucinate misinformation and drag on and on about things that don't matter (see: Grokipedia, or look up AI hallucinations). It has many chances to cause things to go awry, and should not be made mainstream as a way to enhance/speed up editing. I suggest it is done by humans: if a new user joins Wikipedia and is flagged or seen on talk pages, maybe give their edits a look, just to make sure they're doing good work. Some ways to spot AI writing are looking for constant groups of three (like, LOTS, basically every sentence); unusual use of em dashes (which look like a bigger hyphen, — vs. -), as they are not on a normal keyboard and take either a copy-and-paste or a very unusual keyboard shortcut to type; and repeated info or full paragraphs that don't really say/mean anything. A lot of these are hard to give examples for, and you just have to see them for the first time to start noticing. Overall, I agree that there should be restrictions on AI edits. Oak lod (talk) 15:49, 20 November 2025 (UTC)Reply
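The surface cues described above are easy to count but dangerously weak signals. A toy illustration in Python of what such counting might look like; given the earlier warnings about false positives (non-native speakers, autistic writers, careful human stylists), nothing like this should ever drive automatic reverts or blocks.

import re

def style_cues(text: str) -> dict:
    """Count two of the informal 'AI tells' mentioned above, per sentence."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]
    n = max(len(sentences), 1)
    # "x, y, and z" constructions (the 'groups of three')
    triads = sum(1 for s in sentences if re.search(r"\b\w+, \w+, and \w+\b", s))
    return {
        "em_dashes_per_sentence": text.count("\u2014") / n,
        "rule_of_three_fraction": triads / n,
    }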
I strongly support the suggestion and would even go as far as suggesting a new flag. AI as a tool is similar to WP:AWB: in unskilled or malicious hands it can do a lot of damage in a short amount of time. Correspondingly, use of AWB is not allowed for drive-by accounts. Similar logic applies to AI, IMHO. For the avoidance of doubt, I think that proper use of AI improves articles, so I think that we should regulate the use of AI, not prohibit it. Fear of outright hallucination is overblown, as far as I can tell: as long as the input was explicitly restricted to correct sources (either a foreign-language Wikipedia article or manually selected WP:RS), there were no hallucinations. Note that the texts of the RS you plan to use for the article should be fed to the engine first in their entirety, as for some reason the AI engines are really shy when it comes to actually fetching information off the Web (I suspect there are legal reasons in play here), so if you just point to the sources, the AI will start generating ideas of its own instead of summarizing the WP:RS as it should. Викидим (talk) 00:14, 24 November 2025 (UTC)Reply
What if we made a box that allows people to flag their own edits as AI-assisted, plus a warning letting people know that fully AI-generated content will be taken down in accordance with policy, and that partially AI-assisted content must be marked so that humans can review it, or it will be taken down if not marked (if there's no policy banning unreviewed AI text already, make one). Then we make a bot like ClueBot to detect AI slop, revert it, and leave a warning, but we set it to be very cautious so it minimizes false positives. I think this would solve the problem, and it neatly combines all the ideas I saw above. RBarr-12@wiki:~/user/talk/contribs 20:07, 2 December 2025 (UTC)Reply
That's probably the best solution. Good idea.
Oak lod (talk) 20:14, 2 December 2025 (UTC)Reply
IDK about the technical feasibility of scanning all edits with a bot, but the policy side of this is just WP:LLMDISCLOSE. -- LWG talk 20:42, 2 December 2025 (UTC)Reply

Consistent display of coordinates

When reading articles about geographic locations in desktop mode, I am slightly annoyed if the coordinates are not available in a convenient and predictable spot near the article title. This forces me to hunt for the coordinates in the infobox or article body. It also means that the article will not be correctly geotagged.

For some examples of articles that have this issue, due to using {{coord}} with |display=inline alone, see Yerevan, Matera, Duluth, Minnesota, San Luis Potosí (city), and Shivneri Fort. Also note, for example, that Shivneri Fort will not show up when viewing Special:Nearby#/coord/19.199,73.8595.

Conversely, when browsing on mobile, coordinates added using |display=title alone aren't visible at all. For some examples of articles with this issue, see Islandmagee, Ostia (Rome), and Matthias Church.

To avoid both of these problems, I would tentatively propose that |display=inline,title should be preferred in most* articles about settlements or geographic features. It seems that it would be possible to use a bot or semi-automated script to enforce this rule (a rough sketch follows below).

Perhaps my proposal is already the accepted approach and the articles above have just unintentionally deviated from it, but I'm not sure. MOS:COORDS doesn't really seem to address this issue and I couldn't find any other relevant guideline. This issue has probably been discussed before; links to past threads would be appreciated.

* There are obviously cases where |display=inline is appropriate. For example, the article Extreme points of the United Kingdom discusses several different points and it would be wrong to geotag the entire topic to any specific one. There are likely other edge cases I haven't thought of. I'm only referring to how to format the "main coordinates" in articles about uniquely identifiable locations: villages, mountains, buildings, etc. ~2025-32085-07 (talk) 23:36, 9 November 2025 (UTC)Reply
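As a sketch of what that semi-automated script could look like: the snippet below (Python, using the mwparserfromhell library) upgrades a lone |display=inline to |display=inline,title. Deciding whether a given article should carry title coordinates at all, per the footnote above, would still need a human or a carefully curated worklist.

import mwparserfromhell  # pip install mwparserfromhell

def upgrade_coord_display(wikitext: str) -> str:
    """Rewrite {{coord}} calls so display=inline becomes display=inline,title."""
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates():
        if tpl.name.matches("coord") and tpl.has("display"):
            if tpl.get("display").value.strip() == "inline":
                tpl.add("display", "inline,title")  # add() replaces an existing value
    return str(code)

# upgrade_coord_display("{{coord|19.199|73.8595|display=inline}}")
# -> "{{coord|19.199|73.8595|display=inline,title}}"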

Hello. In my opinion, the title is a goofy spot for coords and we should list them only in the infobox alongside all the related metadata about a place. It's a weird historical artifact and anachronism that the coords get such special placement and their special page placement has been a constant headache for years with different views and different skins, as you note. Is there a reason coords are so special that they can't be put in the infobox? The coords seem as relevant to Pittsburgh as its population. --MZMcBride (talk) 20:47, 10 November 2025 (UTC)Reply
Coordinates are still somewhat “special” in that they link to an external tool. However I personally don’t think that’s reason enough to separate them.  novov talk edits 00:02, 12 November 2025 (UTC)Reply
They don't require this; we make a choice (we can also show them with the built-in maps), but it's difficult to change something that has been around for as long as this. They are mostly special in that they have to directly relate to the primary topic of the page, and the page has to describe a specific spot that is not too large or otherwise vague. —TheDJ (talkcontribs) 11:33, 13 November 2025 (UTC)Reply
I would argue that a city's coordinates are a more defining property than its population. Population numbers change over time, coordinates generally don't. As for what's of greater value to readers, IDK.
Personally speaking, I find myself clicking coordinate links very frequently. The ability to view a location on a map is immensely useful. Even for articles that include a locator map image or embedded "Wikimedia Map", I find GeoHack useful because of the links it provides to external services.

Something else I'll mention, but which probably deserves its own discussion, is that WikiMiniAtlas now seems redundant to Wikimedia Maps. WikiMiniAtlas was great for its time but its design now feels outdated. The aesthetic recalls the early days of Web 2.0, there's no support for pinch to zoom, etc. The one area where WikiMiniAtlas shines is that it does provide links to other nearby articles. I'll admit that's a pretty major feature, arguably even the main feature.
(Also, is it just my imagination or is WMA's projection extremely distorted? WMA always seems to be stretched out along the east-west axis. Compare Iceland on WMA vs. OSM.) ~2025-32085-07 (talk) 07:10, 19 November 2025 (UTC)Reply
Coordinates do change over time if you give it enough time. 😀 Anomie 12:57, 19 November 2025 (UTC)Reply
also wondering myself how people even find coordinates. I had to remove some from a page recently for being totally wrong. ← Metallurgist (talk) 04:51, 13 November 2025 (UTC)Reply
I've also occasionally come across incorrect coordinates in Wikipedia articles. At least in the cases I've seen, the mixups sometimes arise when multiple nearby localities have similar names. ~2025-32085-07 (talk) 07:35, 19 November 2025 (UTC)Reply
I've pointed this out on a few talk pages, but generally when it comes to coordinates, maps, and stuff like that, all Wikipedia MOS goes out the window. Having coordinates without a source is original research. Having a locator map without a source for the boundaries is original research. There is almost no quality control, and rather than removing inaccurate or poorly sourced maps/geographic information, people argue they should be left until someone offers a better one. Really a huge issue; as a cartographer I'm a bit appalled. GeogSage (⚔Chat?⚔) 07:49, 19 November 2025 (UTC)Reply
Wikipedia:No original research defines original research as material for which the real world doesn't have a source saying that, which is importantly different from material for which the Wikipedia article doesn't cite a source. WhatamIdoing (talk) 18:41, 19 November 2025 (UTC)Reply
Having a boundary file without a citation is like a direct quote without attribution. There are several maps where the boundaries are user generated, or appear to be, and people grab coordinates for places from a map but don't have a source verifying that those are the actual coordinates. Going onto Google Earth, grabbing a bunch of points, and making a map that says those points are the locations of _______ is OR. Boundaries are often challenged by official organizations; stating "This is where the border for ____ is" without saying where we got that information would not be acceptable in text. GeogSage (⚔Chat?⚔) 02:55, 26 November 2025 (UTC)Reply

Idea for International Mentoring Day '26 & beyond

Recently I have learned that there is an International Mentoring Day on 17 January. The UK and the US also have national commemorations to celebrate mentoring and thank mentors of all sorts (e.g. in corporate mentoring programmes, adult-led youth groups, and teaching). In the UK, this is 27 October; in the US, the entire month of January.

With this in mind, I would like to propose that Wikipedia:

  • Start an annual commemoration on January 17 of this coming year with notification about the day somewhat in advance, and encouragement to all editors to take a few minutes to thank their mentors whether current or past, as well as those who offer guidance as Teahouse, Help Desk, and Village Pump staff;
  • Share stories about how mentoring helped; and
  • Offer "Did You Know?" tidbits around and on January 17 about how the commemorations came about in the UK and the US.

As we are a little over 9 weeks away from January 17, there would be adequate time to plan for its commemoration on Wikipedia if the decision is taken to carry this idea forward. ~2025-33078-41 (talk) 17:52, 12 November 2025 (UTC)Reply

The problem with days of X is that anyone can declare any day the day of X, and these things die after a year or two when people forget about them.
Also, I haven't really seen much active mentoring on Wikipedia, but that may be my fault, because it is not the kinda thing I would notice. Polygnotus (talk) 03:42, 20 November 2025 (UTC)Reply
There really is an International Mentoring Day on 17 January. It was started as an extension of the US National Mentoring Month (held throughout the month of January), but is now encouraged worldwide.
Because mentorship is an important part of Wikipedia for many editors, it just seems like promoting the day would be a wonderful way to honor those who serve in this way.
Do you have any idea where else in the world of Wikipedia this suggestion could be raised with a greater likelihood of it being taken further? ~2025-36716-26 (talk) 10:13, 27 November 2025 (UTC)Reply
No clue, sorry. Polygnotus (talk) 10:32, 27 November 2025 (UTC)Reply
I think I have just found what seems a good step to move forward with this idea: to make a "Central Notice banner request." ~2025-37075-42 (talk) 16:54, 28 November 2025 (UTC)Reply
Central Notice banners are rarely used, and then only for fully fleshed-out ideas with consensus behind them that have already been implemented.
So far you have reached one person, and they were not enthusiastic about the idea.
Is there a reason you would like to push this, such as being involved with the people or organization that decided to give that day that label, or that joined the initiative? Polygnotus (talk) 17:07, 28 November 2025 (UTC)Reply

History Viewer User group?

Hello all. I've been working on a bit of a proposal with some admins, which I've included below.

While the viewdeleted bundle of three user rights (browsearchive, deletedhistory, and deletedtext) is currently accessible only to administrators, administrators are not the only group whose workload would benefit from access. For example, those working in copyright, edit filters, SPI, and many other areas dealing with content likely to be deleted due to disruption or other reasons would benefit immensely from having direct access to deleted revisions. There is also a swath of people who simply do not wish to be admins, for whatever reason, but would benefit from this in anti-abuse workflows. I propose that a process be established to grant some viewing permissions to those qualified to view deleted revisions, without necessarily needing the full admin toolkit. I'm aware this is unbundling, though I believe it avoids the perennial proposals of unbundling by not touching the delete, block, or protect tools at all, and instead focusing on its intended purpose.

Thus I propose that a History Viewer group be added, with the following permissions:

  • Search deleted pages (browsearchive)
  • View deleted history entries, without their associated text (deletedhistory)
  • View deleted text and changes between deleted revisions (deletedtext)
  • View log entries of edit filters marked as private (abusefilter-log-private)
  • Enable two-factor authentication (oathauth-enable)

The group would be grantable/revocable by admins, and the process for requesting the permission would be to post on a dedicated PERM page, with a request that remains open for a period of at least one week. The discussion must be advertised to AN, VPR, and BN. If the administrator closing the request finds that there is consensus to grant, they will add the permission to the requesting user. Editors applying should have a minimum of 2,500 edits and at least 6 months tenure.

EggRoll97 (talk) 22:28, 12 November 2025 (UTC)Reply

How is this compatible with the views expressed by our overlords the WMF at Wikipedia:Viewing deleted content? GreenLipstickLesbian💌🦋 22:54, 12 November 2025 (UTC)Reply
See Wikipedia_talk:Requests_for_adminship/Archive_269#WMF_reply_about_userrights, particularly the response from Joe there, I think the general consensus is that the issue is trust. An RfA process with community votes implicitly proves that the user has this trust from the community. While the risk of deleted content containing extremely private information is low, it is not zero, and as such we'd not be comfortable allowing users access to this without first proving they have the trust of the community. I believe this process would be adequate to ensure trust of the community. EggRoll97 (talk) 23:31, 12 November 2025 (UTC)Reply
If you want to view deleted content then you need to either pass RFA, pass an equivalent process (e.g. an admin election) or be granted the permission by arbcom. So a request for this new right would require the support of a majority of those commenting and at least 25 supporters. I don't see the benefit in creating a new process when we already have RFA and AELECT. Thryduulf (talk) 23:30, 12 November 2025 (UTC)Reply
Over the last few years, we have made relatively large strides in making adminship more accessible to more members of the community. I suspect that many of the people who could pass an RfA-like process which would be required to gain access to a permission like this could just go straight for RfA or AELECT and get the full toolset anyway. We want to encourage that too: I fear a permission like this could negatively affect admin recruitment if people feel like they need to go through this intermediate hoop first. Mz7 (talk) 23:48, 12 November 2025 (UTC)Reply
I disagree; your argument could be applied to any user right, because admins have all of them. Most admin candidates have some form of advanced permissions anyway. Tenshi! (Talk page) 16:18, 15 November 2025 (UTC)Reply
Well, and this is mostly before my time, but I believe that there is a correlation between unbundling rollback from the admin toolset and an increase in RfA standards. I believe rollback became a separate user right in early 2008? Compare that to the chart at Wikipedia:Requests for adminship by year. GreenLipstickLesbian💌🦋 21:16, 15 November 2025 (UTC)Reply
Possibly, but at that time we had more editors than we do now; the number has been dropping since 2007. Tenshi! (Talk page) 21:36, 15 November 2025 (UTC)Reply
Well, I don't think you can deny the correlation, but yes, that is an equally valid hypothesis as well. GreenLipstickLesbian💌🦋 22:44, 15 November 2025 (UTC)Reply
You made an excellent observation, and I think it is the correct response to Tenshi’s argument: indeed every user right that we have unbundled from the admin toolset over the years, from rollback to template editor to page mover, has made adminship a little less desirable for the people who would have benefited from the rights we unbundled. If we unbundle the ability to view deleted page histories too, then that too will also negatively impact admin recruitment efforts. Mz7 (talk) 19:06, 19 November 2025 (UTC)Reply
This permission is a core sensitive spot for why adminship is turning into a big deal. A while back, I tried to unbundle everything except this userright to make a patroller permission - IIRC the primary objection was that it wasn't technically possible. Tazerdadog (talk) 23:58, 12 November 2025 (UTC)Reply
deletedhistory is security through obscurity. It's available to anyone through the API. (Example.) —Cryptic 00:12, 13 November 2025 (UTC)Reply
Without deletedhistory you can't add drvprop=comment to that query. deletedhistory also lets you see revision-deleted user names and comments. Anomie 01:02, 13 November 2025 (UTC)Reply
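For readers following along, this is the kind of query being discussed, sketched in Python against the Action API. As noted just above, which properties the server will actually return depends on the caller's rights: drvprop=comment (and revision-deleted fields) require deletedhistory.

import requests

API = "https://en.wikipedia.org/w/api.php"

def deleted_revision_metadata(title: str) -> dict:
    """Fetch metadata (no edit summaries) for a page's deleted revisions."""
    return requests.get(API, params={
        "action": "query",
        "prop": "deletedrevisions",
        "titles": title,
        "drvprop": "ids|timestamp|user|size|sha1",  # note: no 'comment'
        "drvlimit": "max",
        "format": "json",
    }).json()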
Is there a reason why edit summaries are hidden but the rest of the metadata (including the sha1 of the wikitext) is shown? Children Will Listen (🐄 talk, 🫘 contribs) 00:38, 16 November 2025 (UTC)Reply
Comments in T51088 indicate that WMF Legal wanted them omitted because sometimes admins don't bother to revision-delete RD-able material if the page is being deleted anyway, since historically both had the same end result of hiding the content from non-admins. The sha1 doesn't tell you much unless you already have the content to compare to. Anomie 01:20, 16 November 2025 (UTC)Reply
... and that's a bug, right? I didn't know this was a thing. I would be surprised if that were intentional. Otherwise why not write a user script to make deletedhistory trivially available to everyone? Mz7 (talk) 13:45, 13 November 2025 (UTC)Reply
There are already user scripts, User:SD0001/deleted-metadata-link and User:DreamRimmer/DeletedMetaData. Tenshi! (Talk page) 13:49, 13 November 2025 (UTC)Reply
No, it's not a bug. This goes back to 2019, bringing parity with access available in Toolforge since 2013. And as I noted above, you need deletedhistory to see comments (edit summaries) of deleted revisions and to see revision-deleted usernames and comments. Anomie 20:26, 13 November 2025 (UTC)Reply
Okay, huh. TIL, I guess. Mz7 (talk) 01:01, 14 November 2025 (UTC)Reply
I've been monitoring this section for a while. I for one don't think information about how to access metadata on deleted edits should be so obscure, simply because it's so counter-intuitive (as noted above). I knew that the limited info was accessible via Toolforge but not the other methods. To this end, I've made this edit to Wikipedia:Viewing and restoring deleted pages, incorporating comments mostly from this discussion by Cryptic, Anomie, and Tenshi Hinanawi. Any tweaks would be greatly appreciated, of course. As for the proposal at hand, I'm generally supportive of unbundling ideas, but I think more concrete examples of how this new right would assist affected users' workflows would be helpful, especially situations where the available metadata about deleted edits wouldn't be enough. Speaking strictly for myself, in my unique situation on enwiki as a non-admin importer who does wiki-archaeology, the usefulness of a usergroup like this would be greatly enhanced by adding the ability to delete/undelete edits (which I know is off-limits in this proposal because it touches on the "core" admin rights of block/protect/delete). For me, 90% of my questions about deleted edits can be answered using available tools (or at least I can make educated guesses based on the information I have), and a very small percentage (maybe 1%?) of requests related to deletion/undeletion can be resolved by just checking deleted text. Graham87 (talk) 10:16, 18 November 2025 (UTC)Reply
Why the requirement to advertise at VPR? I don't think any other permission has required that. Tenshi! (Talk page) 12:33, 13 November 2025 (UTC)Reply
I wasn't particularly sure where to put advertisement requirements, since it would need to be widely advertised to satisfy the WMF. I guess maybe a watchlist notice would suffice, similarly to RfA? EggRoll97 (talk) 06:10, 14 November 2025 (UTC)Reply
It doesn't really seem clear what the WMF wants on that point, though it might be better to advertise at WP:AN and WP:VPM instead? Tenshi! (Talk page) 11:21, 15 November 2025 (UTC)Reply
I think that would suffice, yeah. EggRoll97 (talk) 02:07, 18 November 2025 (UTC)Reply
Whom do you envision needing this ability, and whom the community says is trustworthy enough to have this ability, and yet is unable to pass WP:AELECT? WhatamIdoing (talk) 06:13, 19 November 2025 (UTC)Reply
"For example, those working in copyright, edit filters, SPI, and many other areas dealing with content likely to be deleted due to disruption or other reasons would benefit immensely from having direct access to deleted revisions." EggRoll97 (talk) 14:09, 19 November 2025 (UTC)Reply
If those people need access to deleted revisions they should stand for adminship. The demonstrated good track record they would need for this new right would be exactly as good at demonstrating suitability for adminship. Note that being an SPI clerk, edit filter helper/manager, etc. doesn't require adminship and also aids one's chances when standing at RFA (and presumably AELECT, but I don't recall whether anyone in those groups has stood using that process yet). Thryduulf (talk) 14:19, 19 November 2025 (UTC)Reply
Then you would have a single-purpose admin who only looks at deleted revisions? Tenshi! (Talk page) 15:28, 19 November 2025 (UTC)Reply
See also Wikipedia:Requests for adminship/Carrite. Children Will Listen (🐄 talk, 🫘 contribs) 15:42, 19 November 2025 (UTC)Reply
As for your comment about anyone in those groups standing in AELECT, I have. EggRoll97 (talk) 15:51, 19 November 2025 (UTC)Reply
Someone working in copyvio needs the delete button, too, so they really should be admins. I don't understand why someone "working in" Wikipedia:Sockpuppet investigations would need access to deleted materials (more than any other editor who encounters a suspicious editor).
viewdeleted is an incredibly sensitive user right. It allows people to see not just copyvios and vandalism, but sometimes things that should be oversighted (e.g., personally identifying information). We need to be able to trust people who have this user right to not spread what they see elsewhere. The real world struggles with this,[4][5][6] so we have to be careful here. WhatamIdoing (talk) 18:56, 19 November 2025 (UTC)Reply
@WhatamIdoing: Some people need access only to the content of deleted revisions of copyright-violating content (e.g., VRT agents). Someone who wants viewdeleted access to help with copyvios on English Wikipedia would not be suited for this right. JJPMaster (she/they) 21:24, 25 November 2025 (UTC)Reply

Merging OSM and Pushpin and custom maps into one radio-button element

Hi, I propose that in articles like Bushehr or Dubai, the OSM, pushpin, and satellite maps be merged into one item, which can be set by an argument named "mergedMap", whose value is like the value of a pushpin map except that it can accept OSM and custom maps, like this:

| mergedMap = OSM#custom1#UAE#Persian Gulf#Middle East#Asia
| custom1 = Dubai_by_Copernicus_Sentinel-2_in_false-colour.jpg

or

| mergedMap = UAE#OSM#custom1#Persian Gulf#Middle East#Asia
| custom1 = Dubai_by_Copernicus_Sentinel-2_in_false-colour.jpg

would create a radio button which contains the OSM, pushpin, and satellite maps in the order mentioned. Zoom, marker, shape, and other OSM settings are as before.

Using a radio button, we have fewer maps in the infobox. Please discuss. Thanks, Hooman Mallahzadeh (talk) 12:40, 16 November 2025 (UTC)Reply

@Zackmann08@Joy Hi and sorry again for pinging. I think this idea would remove much of the code around the "onByDefault" parameter and code such as "mapframe=yes". Additionally, it makes infoboxes neat. Please discuss. I volunteer to implement it with a pretty interface design. Hooman Mallahzadeh (talk) 06:45, 20 November 2025 (UTC)Reply
User:Hooman Mallahzadeh while I applaud the idea, I think you GREATLY underestimate how complicated such an endeavor would be. Recent work I've done with Module:Infobox mapframe has shown that the littlest change has enormous reach and effect. I don't object to the principle of what you are trying to achieve, but I am skeptical that such a feature could be implemented in an editor-friendly way...
That being said, I'm 100% open to being proven wrong. My advice would be to try to create a working sandbox version of what you are talking about. A proof of concept (even if it has a few bugs in it) would go a LONG way to convincing me (and I would imagine others) that what you are describing can and should be done. Then you would definitely need an WP:RFC to enact such a major change... Just my 2 cents. Zackmann (Talk to me/What I been doing) 06:51, 20 November 2025 (UTC)Reply
@Zackmann08 You said:

I think you GREATLY underestimate how complicated such an endeavor would be.

To be honest, I believe that if Wikipedia follows "software design patterns" and has a correct software design, then there is no need to worry about such coding, or even to test it much. Believe me! I will try to create a "working sandbox version" as soon as possible. Thanks again for your response. Hooman Mallahzadeh (talk) 07:06, 20 November 2025 (UTC)Reply
To be clear, I'm not saying it isn't possible, I just think you may find it to be more difficult than you imagine, but I certainly wish you luck with it! Zackmann (Talk to me/What I been doing) 07:09, 20 November 2025 (UTC)Reply
Even no need for much test them isn't a good idea, no matter how the software is designed. There are many experienced software developers, well-versed in modern software development techniques, who attest to the value of adequate testing. (Automated regression testing is a key strategy to facilitate software development.) isaacl (talk) 07:30, 20 November 2025 (UTC)Reply
Yes, you are right! Even the best code segments might have naughty bugs that appear once in 100 billion runs. But with software design patterns, we can reduce the testing effort a great deal, because they improve maintainability and reduce the rigidity of code. Additionally, even when we encounter bugs, we can correct them conveniently.
This is true for this code segment also. If it is implemented well, and according to patterns, we would need much less testing than with rigid code. Hooman Mallahzadeh (talk) 07:46, 20 November 2025 (UTC)Reply
To be frank, your comments make you sound inexperienced with production software development. (And there's no need to link to the design patterns article again, and really no need to repeat your previous comments.) Testing is about ensuring the specific specifications for which a component is designed to meet are upheld. Good software design (which is often aided by following design patterns) helps ensure that changes can be more easily made in a decoupled manner. Good design will make it easier to make changes that will pass testing. It does not reduce the amount of testing required. isaacl (talk) 08:09, 20 November 2025 (UTC)Reply
@Isaacl Yes! Definitely, it reduces the amount of testing required. If we have correct classes, then only unit testing and integration testing would be required. The unit testing has largely been done already; we need only integration testing.
In this case, I propose this scenario:
  1. Define an interface and a class for mergedMapClass
  2. Implement a rendering function for mergedMapClass that differs for OSM, pushpin, and custom maps (this needs a lot of testing, but it has been done previously; just copy and paste the existing rendering code).
  3. Make a Radio-button element that recognizes mergedMapClass as the main item
  4. Force this radio-button to call different rendering functions for each text "OSM", "custom1", "UAE" etc.
And that's it! How much testing would be needed? Only some mapframe settings like size may cause problems.
I think the integration testing for such a scenario would not take too much to finally reach stable code. Hooman Mallahzadeh (talk) 08:31, 20 November 2025 (UTC)Reply
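For what it's worth, the dispatch structure in the numbered scenario above can be sketched language-agnostically (shown here in Python purely for illustration; the real implementation would be a Lua module behind the template, and all names below are invented). Each token in mergedMap selects a renderer:

from typing import Callable, Dict

def render_osm(token: str) -> str:
    return f"<div class='merged-map osm'>mapframe {token}</div>"

def render_image(filename: str) -> str:
    return f"<div class='merged-map image'>[[File:{filename}]]</div>"

def render_pushpin(map_name: str) -> str:
    return f"<div class='merged-map pushpin'>pushpin map: {map_name}</div>"

def pick_renderer(token: str, custom: Dict[str, str]) -> Callable[[str], str]:
    if token.startswith("OSM"):
        return render_osm
    if token in custom:  # e.g. custom1 -> an image filename
        return lambda t: render_image(custom[t])
    return render_pushpin  # anything else is a location-map name

def render_merged_map(merged_map: str, custom: Dict[str, str]) -> list:
    """Split 'OSM#custom1#UAE#...' and render each option for the radio button."""
    return [pick_renderer(t, custom)(t) for t in merged_map.split("#")]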
I don't understand why you're continuing to argue against best software development practices, when it doesn't inhibit you from proceeding. Modern software development practices have increased the amount of testing, covering more levels of the software system, and focused on automating it. isaacl (talk) 17:23, 20 November 2025 (UTC)Reply
Dear @Isaacl, I was talking about the reduction of unit testing due to the reusable components provided by good design. I am trying to implement the above scenario as soon as possible. When it is finished, I will ping you so we can test that code segment together as much as we can, and also do some automated testing, because I am not familiar with that. Thanks for your ideas. Hooman Mallahzadeh (talk) 11:46, 21 November 2025 (UTC)Reply
Please, no pings on this topic. It's not an area I'm interested in collaborating in. "I'm going to design this so well that less testing is needed" is an old fallacy in software development. A key reason for improving modularity is to design for testability, adding more unit testing than can be done without it. It's OK if you're not interested in gaining more understanding of software architecture, but if that's the case, I suggest not persisting in making statements that are counter to best practices. isaacl (talk) 18:18, 21 November 2025 (UTC)Reply
Ideally we shouldn't overinvest in keeping the old location maps alive, and instead fix the equivalent functionality that's supposed to exist with mapframes, cf. Template talk:Infobox mapframe#switcher zoom/center?
I do see the sense in having a generic switcher template, though, that ability might be useful in general, potentially for any sort of content and not just maps. --Joy (talk) 12:36, 20 November 2025 (UTC)Reply

@Joy: Yes, and thanks for your comment. A "generic switcher template" that is queried simply would be very nice. For the mapframe zoom switch, I propose this method for the mapframe switcher:

|mergedMap = OSM1#OSM2#OSM3#Asia#UAE#Custom1
|OSM1 = {{Infobox mapframe |id=Q4948020|geomask=Q30}}
|OSM2= {{Infobox mapframe |id=Q4948020|geomask=Q100}}
|OSM3= {{Infobox mapframe |id=Q4948020|geomask=Q120}}
| custom1 = Dubai_by_Copernicus_Sentinel-2_in_false-colour.jpg

That is rendered in the radio-button switcher in the given order. Do you agree? Thanks, Hooman Mallahzadeh (talk) 14:37, 20 November 2025 (UTC)Reply

@Joy@Zackmann08 Hi again. I implemented the idea at Template:MergedMap/sandbox, and if you place that template in, for example, the Tehran article in preview mode, OSM will be an item of a radio button. This template still needs more work, but because I am a student, I will complete it in my free time. If anyone would like to contribute to this project, I would appreciate it.
This proves that such an idea can be implemented. In fact, the radio buttons accept a div element no matter whether it is OSM, pushpin, an image, etc. Hooman Mallahzadeh (talk) 05:23, 25 November 2025 (UTC)Reply

@Joy and Zackmann08: Hi, and really sorry for pinging. I created Template:MergedMap and early tests were successful. Just call {{MergedMap}} in a city or place article and set its mapQuery argument to something like:

{{MergedMap|OSM#Iran#Asia}}

And the OSM and pushpin maps will be merged into one radio button.

Additional arguments, customMap1 to customMap10 and OSM1 to OSM10, can be set for other map images and mapframes, e.g. by setting these arguments:

{{MergedMap
|mapQuery = OSM#customMap1#Iran#Asia#customMap2#OSM1#OSM2
|customMap1 =Tehran 51.41504E 35.69272N.jpg
|customMap2 = Tehran_district_map_(blank).svg
|OSM1 = {{Infobox mapframe|id=Q3616|zoom = 8}}
|OSM2 = {{Infobox mapframe|id=Q84|zoom = 8}}
}}

Certainly it is full of bugs now, and I am ready to correct them. Hooman Mallahzadeh (talk) 19:28, 25 November 2025 (UTC)Reply

I have also merged the maps of the Tehran article as a test. Please inspect it. Thanks, Hooman Mallahzadeh (talk) 19:28, 25 November 2025 (UTC)Reply
@Joy@Zackmann08@Hike395 Hi and really sorry again for pinging. Please inspect these test cases:
  1. London
  2. Dubai
  3. Madrid
  4. Tehran
  5. Shiraz
  6. Bushehr
I think this template is now at a stable version. This method reduces the number of maps in the infobox, making it more readable and useful. If these are OK, please do something to make «Template:MergedMap» the preferred map of infoboxes over Template:Mapframe, Template:Pushpin map, and normal maps. Thanks again. Hooman Mallahzadeh (talk) 15:57, 26 November 2025 (UTC)Reply
@Hooman Mallahzadeh: I don't have time to dive into this right now. I looked at London and Dubai. Have to say this shows a TON of potential and thus far I am very impressed.
That being said, I would strongly advise against adding it to any more articles. Instead I would make a list of testcases at Template:MergedMap/testcases. I would also post on the talk page for {{Infobox settlement}} and give people at least a week (it is a holiday week here in the US) to really look this over.
As I said, I'm thus far very impressed, but this has the potential to change literally hundreds of thousands of pages. Let's make sure we iron out any issues before you put it in any more articles.
Good work!   Zackmann (Talk to me/What I been doing) 16:07, 26 November 2025 (UTC)Reply
My final proposal is that, because this template is general-purpose, infoboxes should customize it. In the end, users should work only with "mapQuery", and all other settings, like mapframe-marker and pushpin-marker, should be set automatically. Even satellite maps can be extracted from Wikidata. For example, I propose that Template:Infobox Airport create a new argument named "mapQuery" that takes values like "mapQuery = Iran#OSM#Satellite#Asia" and produces maps in that order, while all settings except "mapQuery" are handled automatically. Thanks again. Hooman Mallahzadeh (talk) 07:10, 27 November 2025 (UTC)Reply
For that to be equivalent to current functionality, we need it to support passing in all the various parameters of location maps and mapframes, so there's no regression from switching.
Also, while we're at it, before moving this to production, we should also review these tags between the hashtags, as well as the naming style (camel case). I'll bring this up in new threads at Template talk:MergedMap. --Joy (talk) 12:19, 27 November 2025 (UTC)Reply

Is Wikipedia:Education program a net benefit to the encyclopedia?

I just came across an article that was the subject of a student editing program, and the student, who I’m certain was acting in good faith, made an absolute mess of the article due to what had to have been simple ignorance of what a Wikipedia article is and how it is written. I think I’ve brought this up before, and I remember seeing this problem way back in my earliest days as an IP editor. The noticeboard has a concerning amount of evidence showing negative impacts of the program on the quality of the encyclopedia (mostly students creating junk articles). I also seriously doubt there’s any long-term editor retention from these projects. With all that in mind, are there any positive impacts of the program for Wikipedia that justify keeping it? Are the students getting some kind of unique benefit that couldn’t be provided any other way? Or is it just a time sink for both parties? Dronebogus (talk) 02:31, 19 November 2025 (UTC)Reply

I can definitely say that there frequently seems to be little follow-through on these classes. I've seen a number of students put the WikiEd template on the talk page of articles I've made, worked on, or watchlisted, and then...nothing. I've checked back in on their userpages and on the class pages for some a year or so later, and it seems like the class started setting up to do things and then just...never did? No idea what happened with the Wikipedia part of those classes. Did the teacher give up before even having them do much of anything other than making accounts and choosing articles to work on, work that then never happened? SilverserenC 02:35, 19 November 2025 (UTC)Reply
I'd imagine a fair amount of these cases stem from students signing up for a class, doing the first few homework assignments, and then dropping it right before the drop deadline a few weeks into the semester. signed, Rosguill talk 02:41, 19 November 2025 (UTC)Reply
Except it's not just the one student. Unless everyone in the class dropped it? SilverserenC 02:43, 19 November 2025 (UTC)Reply
Is there enough material on how to edit Wikipedia for folks, especially students? The UI does take a bit to learn. User:Bluethricecreamman (Talk·Contribs) 19:06, 19 November 2025 (UTC)Reply
I had the same experience recently. They made a few small changes on Feminist views on transgender topics, put a template on the talk page, and then never responded to my questions. I wonder if there are some students who are just coasting through it or not doing the work. Katzrockso (talk) 02:42, 19 November 2025 (UTC)Reply
I mean, if I was a student in one of these projects, I probably would view it as something to get through for the sake of the assignment and not something I actually cared about. Dronebogus (talk) 02:43, 19 November 2025 (UTC)Reply
Sure, but if the programs are encouraging/permitting students to exert such minimal effort, there is your original question of whether the program is worth it. I looked through the course [7] that was involved on that page and it raised more questions than it answered.
I think this whole question is a non-starter, though, from what I understand the WMF pushes this very hard and would not take kindly to community efforts to interrupt it, but that could be an incorrect perception on my part. Katzrockso (talk) 02:50, 19 November 2025 (UTC)Reply
Why is WMF pushing this so hard? Are they getting paid or something? Dronebogus (talk) 02:51, 19 November 2025 (UTC)Reply
I think one of the ideas is that with more knowledge/experience of Wikipedia, the students might become editors later on. Katzrockso (talk) 03:01, 19 November 2025 (UTC)Reply
I’d like just one example of that ever actually happening Dronebogus (talk) 03:16, 19 November 2025 (UTC)Reply
I'm one! I originally created my account for a WikiEd course back in 2018. I didn't do much editing for a couple of years after the course ended, but then I dusted off the same account when I started getting interested in editing more seriously in 2021. I'd had some interest in Wikipedia even before taking the course in question, but I think going through WikiEd helped give me a baseline level of Wikipedia confidence that empowered me to come back later and start editing independently. ModernDayTrilobite (talkcontribs) 18:22, 21 November 2025 (UTC)Reply
I think another goal is also better understanding of Wikipedia by students. My gut feeling is that bad courses will produce (generally) bad results, good courses will produce (generally) good results. Designing/running a good course requires, among other things, that the instructor (and course designer, if different) understand what a Wikipedia article is, how it gets written, what talk pages are and how to use them, and at least a basic understanding of Wikipedia culture and basic policies. My experience as a trainer for Wikimedia UK has taught me that these things are not intuitive to everybody (possibly even most people), and also that (strange as it may seem to those of us here) not everybody is interested in being a Wikipedia editor - they don't care how the sausage is made.
I expect a lot of the poor outcomes are a combination of disinterested students, disinterested and/or clueless teachers and clueless course designers. We do need to be careful not to throw out the baby with the bathwater, so (to mix a metaphor) we need to somehow separate the wheat from chaff. Unfortunately I don't have any good ideas how to do that off the top of my head. Thryduulf (talk) 03:18, 19 November 2025 (UTC)Reply
“[…] these things are not intuitive to everybody (possibly even most people) […] not everybody is interested in being a Wikipedia editor” – exactly. That’s why student editors seem to me a lose-lose for everyone, since they don’t produce good work or become regular editors, and I highly doubt they actually learn anything from these assignments. Dronebogus (talk) 05:22, 19 November 2025 (UTC)Reply
I don't think the WMF "pushes" this at all. They're not really involved in it, except to the extent of funding some of it (Wiki Education Foundation for the US, and chapters for almost everywhere else). WhatamIdoing (talk) 06:31, 19 November 2025 (UTC)Reply
This question strikes me as a bit odd, because I don’t see WikiEd as the source of student editors; I see it as an attempted solution to student editors. (Rather like AfC is not a source of COI editing, but an attempt to contain and address it.) I personally do think student editors writ large are a net benefit (if nothing else, we have to convince the next generation they can edit if we want some to discover they like to edit), but we don’t really control whether student editors exist. Since they do, I think the net positive of WikiEd is clear. Given the many ways a completely well-intentioned class could get themselves into trouble, it is much better to have specialized resources and staff to guide and monitor their efforts. I see it as a sign of WikiEd’s success that most student editors are harmless to the encyclopedia, and a real triumph that some make genuinely valuable contributions. ~ L 🌸 (talk) 05:11, 19 November 2025 (UTC)Reply
“Most student editors are harmless to the encyclopedia” – um, citation needed? My evidence that student editors are generally damaging to the encyclopedia may be purely anecdotal, but you can’t refute it by saying “actually they’re generally harmless and even useful” with zero examples of constructive work done by student editors. Dronebogus (talk) 05:18, 19 November 2025 (UTC)Reply
Honestly, I agree. The idea of student editors is great, but most don't know how to write in Wikipedia's style and their teachers don't encourage them to because they have their own requirements. Shocksingularity (talk) 05:41, 19 November 2025 (UTC)Reply
Does any newbie know how to write in Wikipedia's style? Were all of your first edits perfect? Mine weren't. WhatamIdoing (talk) 06:23, 19 November 2025 (UTC)Reply
Yes, but student editors rarely if ever advance beyond newbie, and on top of that they are tasked with making massive edits to articles immediately, with a tight deadline, instead of starting slow and doing things at their own pace. Dronebogus (talk) 06:27, 19 November 2025 (UTC)Reply
Have you looked at the numbers in Template:Registered editors by edit count? All new editors rarely advance beyond newbie. This is why I've estimated that we need 100,000 people to go through Special:CreateAccount to replace me when I die. Student editors get further than most, but 70% of new accounts don't make even a first edit, and half of the ones who do make an edit don't come back to edit on a second day.
Student editors are rarely tasked with "massive edits", and never "immediately" or with a "tight deadline". Most of them create an account in September, do WP:The Wikipedia Adventure in October, pick an article in October, write a draft in November, and (if they get that far) post it in December. The Wiki Ed Foundation has a step-by-step curriculum.
Another area in which we see a big difference is blocks:
  • Typical newbie: 15% chance of block in the year after their first edit.
  • Typical student: 0.2% chance of block in the year after their first edit.
I think the bottom line is that all newbies struggle, but student editors actually struggle less than the typical non-student newbie. WhatamIdoing (talk) 06:43, 19 November 2025 (UTC)Reply
1: How many of those new editors are trolls and people here just to goof around, vs. serious good-faith editors? 2: Of course students rarely get blocked; they’re here for a single purpose that doesn’t fall into any frequently blocked category (the aforementioned trolls/goof-offs, POV warriors, spammers, etc.). Dronebogus (talk) 07:18, 19 November 2025 (UTC)Reply
Yes, and that's part of what makes them better newbies. WhatamIdoing (talk) 19:00, 19 November 2025 (UTC)Reply
This is a situation where confirmation bias is hard to resist, since of course you never see the harmless student editors. (Note that "harmless" is different from "positive".) The default curriculum for WikiEd guides students away from ever editing in mainspace at all (using draftspace instead), for example. Compared to the volume of programs being run, the traffic at Wikipedia:Education noticeboard is quite low, and it's a virtue that volunteer editors don't have to be on the hook to resolve the problems that do arise. As for positive examples, well, if you poke into one of the most recent problems on that noticeboard (a UC Davis class on fish), a previous student in that class went on to create multiple GAs. But confirmation bias also means that you're unlikely to know that a successful editor was originally a student, since that won't exactly be advertised alongside their good edits. ~ L 🌸 (talk) 16:07, 19 November 2025 (UTC)Reply
Cosigning @LEvalyn's comment above. There are hundreds upon hundreds of classes using Wikipedia every term. The fact that only a scant handful of them ever end up on your radar is all the evidence you need that most are not problematic. -- asilvering (talk) 11:35, 19 November 2025 (UTC)Reply
This is my view as well. Unless we have indications that WMF is expending massive amounts of resources on this, it seems appropriate to provide a channel for educators to use Wikipedia for courses in a way that is easily monitored and which gently steers people towards less problematic activity. Anecdotally, outside this program I’ve seen professors with less than 100 edits think that they’re “experienced” and ready to teach a Wikipedia course, and then react poorly when their methods are challenged by the community. In general, they seem to respond much better to steering and guidance from the WMF than volunteer editors. signed, Rosguill talk 16:25, 19 November 2025 (UTC)Reply
The Wikimedia Foundation does not run education programs. That's mostly the completely separate Wiki Education Foundation. WikiEdu gets some grant money (as of a few years ago, less than half their budget and declining) from the WMF, but I don't think they're even technically an m:affiliate. WhatamIdoing (talk) 19:03, 19 November 2025 (UTC)Reply
Wikipedia:Wiki Ed/Davidson College/Bio320 Plant Adaptations (Fall 2025): 52.5k words added, and in a quick skim the quality seems legit. Aaron Liu (talk) 12:35, 19 November 2025 (UTC)Reply
Hi all, I'm LiAnna -- I am responsible for edits coming from m:Wiki Education Foundation's Wikipedia Student Program, in which we support college and university instructors in the United States and Canada to assign students to edit Wikipedia as a class assignment. There are student projects here on English Wikipedia that aren't under our auspices (instructors who don't know our support exists, instructors in other countries supported mostly by Wikimedia affiliates in those countries, some secondary/high school classes), but most are part of our program. As a few of you have noted (thanks!), our program brings about 12,000 new editors to Wikipedia each year, so while there are definitely some students who we all agree aren't producing good content, the vast majority in fact are adding value to Wikipedia. Most of us supporting this program on the Wiki Education staff are Wikipedians, and none of us would do this if we felt like it was a net-negative (or even anywhere close to that) for Wikipedia.
Let me specifically address a few points in this discussion:
  1. Each term, we onboard around 300-400 courses that are planning to teach with Wikipedia (see this term's courses here). About 20% of these will not actually do or finish the assignment, and there are myriad reasons for this. Sometimes a class is canceled by the university; sometimes once they get into editing they realize it's more work than they have time to give, so they stop; sometimes the students are unenthusiastic and the instructor doesn't want to force students who aren't going to do good work to edit; etc. Most of the classes that do complete the assignment also have one or two students who just don't finish the assignment (this is true in most college classes for all assignments, not just the Wikipedia assignment).
  2. We provide extensive training modules for student editors to complete; the specific ones are tailored to the assignment they're given. We also offer a variety of support for instructors, in assignment design, office hours, etc., such that we do our best to ensure their plan for the course will produce the right kind of content for Wikipedia. Instructors and student editors in our program get extensive guidance and support; we've been doing this for 15 years and have a very good sense of what works and what doesn't in terms of producing good content for Wikipedia, and we steer away anyone who is not following our best practices.
  3. Of course, not every student follows directions, and some will produce bad content. While you as a Wikipedian are of course welcome to interact with student editors as you would any other new editor, we do not expect any volunteer to clean up any bad work added through our program. Instead, please feel free to leave a talk page message for or ping User:Ian (Wiki Ed) or User:Brianda (Wiki Ed), our two Wiki Experts. Ian and Brianda are both experienced editors who will jump in and make the edits necessary while communicating with the student and instructor as needed. If there's a more class-wide problem, feel free to bring it to the WP:ENB, and we will intervene with the instructor.
  4. Our organization's focus is not on retaining student editors as long-term contributors (although a handful do stick around on their own). Instead, we focus on retaining good instructors, who then bring another group of students each year. For example, review this instructor, whose behavioral ecology students have made a huge impact on species articles for a decade. But given declining youth brand awareness of Wikipedia, I do think our work is a helpful effort to (as Thryduulf says) have students better understand Wikipedia, when to use it, and when not to use it.
  5. In terms of the specific article referenced above (I Am Not Your Negro), the instructor reached out to us yesterday about this case. The instructor agreed the student editor's work had some problems, including with tone, but felt like there was some good content in there. Both the student editor and instructor were taken aback that the edit was reverted without any comment on a talk page or indication in the edit summary of what was wrong. We recommended the student post on the talk page, and then add back in more fact-based and less essay-like information in smaller edits. If there are additional problems with their contributions, please engage with the student on the talk page about specifics of what they need to do to fix it.
As always, I'm happy to answer any questions about our work. --LiAnna (Wiki Ed) (talk) 18:42, 19 November 2025 (UTC)Reply
I don’t really think “declining brand awareness” is a problem; if the top reason provided (“other sources are better”) is accurate, then that’s exactly what we want, because it’s true. Wikipedia (and all encyclopedias) are inherently inferior to other sources due to their tertiary nature. Obviously we should still improve content, but people should also be going to other, better sources anyway. We don’t need to maximize the number of readers like a for-profit needs to maximize the number of customers. Dronebogus (talk) 19:09, 19 November 2025 (UTC)Reply
Tertiary sources aren't inherently inferior. They are worse for some purposes and better for others. If you want a quick summary of a subject, then an encyclopedia is a better option than, e.g., original scientific journal articles.
We don't need to maximize the number of readers, but there are consequences to losing readers. One of those is that readers are the primary source of future editors. If the next generation doesn't read here, then they won't edit here either, and then Wikipedia will eventually die for lack of editors. WhatamIdoing (talk) 22:03, 19 November 2025 (UTC)Reply
Most student editors don't understand WP:NOTESSAY (for fairly obvious reasons) and WP:NPOV (most humanities courses explicitly teach people not to be neutral with regards to injustice, etc.) Also keep in mind that most students are incentivized to pass the course, not actually contribute to the encyclopedia, which results in AI use and copyright violations.
As for getting younger people to interact with Wikipedia, social media promotion should help, especially short-form video content on platforms like TikTok or Instagram. Children Will Listen (🐄 talk, 🫘 contribs) 17:27, 20 November 2025 (UTC)Reply
Some random WikiEdu contributions:
Juvenile incarceration in the United States: Fails WP:NOTESSAY, fork of Youth incarceration in the United States, WP:UNDUE content in lede (though this is mostly because it's written as an essay), broad generalizations (e.g. see #Daily life while incarcerated with https://worldschildrensprize.org/adayinthelifelockedup), unreliable sourcing (refs 2, 3). Better to delete this entirely since we already have an existing article about this topic.
Incarcerated firefighters: Has questions in the lede ("What Do Incarcerated Firefighters Do?") and cites a YouTube video. This article is mostly fine and can be fixed with some copyedits.
NoFilter: "This hashtag is often misused, so it has been abandoned in recent years..." fails verification, International Journal of Virtual Communities and Social Networking is a predatory journal, and "However, now it is used as a trick. Many people do not believe the posts that use this hashtag; research shows that this hashtag is nothing but a lie because of being heavily misused." violates WP:NPOV and is not supported by sources. "It is no longer as impactful as it used to be in the 2010s" and "#NoFilter was used as a form of resistance. The goal was to encourage people to show their true selves and get rid of the pressure to be perfect." are not sourced at all. "NoFilter started as a positive thing, inspiring people to be who they are. However, people started misusing the hashtag. They still used the hashtag on images that were tweaked and deceive others. [...] As a result, #NoFilter has become a gimmick and has lost its credibility." is the last straw: since nearly all the additions fail verification, this edit should be reverted.
Wenatchee High School: Adds promotional material about the school ("providing long term benefits that will help students before they graduate", "...where they help students get a jumpstart by providing many opportunities in giving them a career pathway and academic journey...") while citing primary sources or none at all. The encyclopedic value of this information is low to nonexistent. Their previous edit adds stuff like "The College Mentor Program is also looking for Volunteer Mentors to help serve Seniors in Wenatchee High School to help them guide towards their future paths. The program is looking students who can volunteer as Virtual Mentors, Writing Editors, Guest Speakers, or Networkers and have forms for students to fill out." Overall, these edits are a net negative and should be reverted (and I fail to understand how this article relates to "Online Communities.")
Starbucks Reserve Roastery (Seattle): Most of their edits are fine, but they have edit-warred to add a "Sustainability" section when repeatedly told it was promotional. However, I think their contributions are a net positive here. Their report makes me think they may have used some AI help for this assignment, but the resulting content looks fine.
Women in the Middle Ages: No concerns, net improvement to the article.
Victoria Spivey: The majority of the content is unsourced (e.g. "Scholars also note that she helped define the themes and vocal approach of classic female blues, and her recordings continue to be discussed in studies of African American music and women's history", "The Black Perspective in Music notes that her lyrics reflected everyday life and the experiences of African American women, showing both independence and emotional depth."), duplicated ("Victoria Spivey was inducted into the Blues Hall of Fame in 1986.") or fails WP:NOTESSAY. Some of the content is fine (e.g. most at #Recording Career (1920s-1940s)). The unsourced sections should be selectively removed from the article.
Lemonade (2016 film): Consists entirely of plot summary changes. Plot summaries don't need sources, but content like "Being underwater is a crucial environmental factor of this portion of the video, for she is attempting to get rid of the weight that now lies on her because of her relationship, hinting as the transition from denial into greater feelings of anger", "the ring of fire, more specifically the image of her sitting directly in the center of it, gives watchers a sense of the trapped feelings that she experiences being stuck within the fire of her rage.", "shifting the attention from solely on Beyoncé to other black women and girls.", etc. veers into analysis. Transitions like "Off into the next chapter of healing." are unencyclopedic. I've reverted this. Children Will Listen (🐄 talk, 🫘 contribs) 19:22, 20 November 2025 (UTC)Reply
This is the big problem I see in student edits: they frequently use Wikipedia as an essay host for their obviously very amateur essays and then almost inevitably abandon this essay-cruft in articles once the assignment is done, to the detriment of readers and other editors. If I have one constructive recommendation to give for WikiEd it’s that instructors need to teach students how to write Wikipedia articles or partner with people who actually can, and grade the results accordingly. Dronebogus (talk) 22:13, 20 November 2025 (UTC)Reply
@Dronebogus, there's nothing any teacher can do to prevent the existence of poor students. Teachers can grade accordingly all they like, but that doesn't make a C-level student's work any better than C-level work. Part of teaching is that sometimes students fail. No amount of training, sternness, or support will eliminate C-level output. -- asilvering (talk) 01:49, 21 November 2025 (UTC)Reply
@Asilvering: Agreed, but C-level output doesn't reach the outside world in most courses. The WikiEdu program is definitely improving some articles; perhaps automatically blocking anyone whose grades drop below an 80 would help refine this (they can do alternative non-Wikipedia assignments instead.) I also suspect that electives, especially ones in the humanities, are more vulnerable to having such students, but I currently don't have the data to substantiate this claim. Children Will Listen (🐄 talk, 🫘 contribs) 03:22, 22 November 2025 (UTC)Reply
"perhaps automatically blocking anyone whose grades drop below an 80 would help refine this"? What? How on earth do you expect this to be enforced? Please remember that this is the encyclopedia that anyone can edit, not the encyclopedia that only students with good grades can edit. People who aren't very good at building the encyclopedia are nevertheless part of our whole process. If any student is creating work so deranged that they need a WP:CIR block, we can simply CIR block them. -- asilvering (talk) 05:16, 22 November 2025 (UTC)Reply
@Asilvering: The problem is that most regular people won't really be making these big edits, for fairly obvious reasons. Let's take me for instance: I have limited experience with content work (I'm currently working away at Grammatical tense), so if I were to create a 4,000-word article about something, it probably wouldn't look good; of course, this will change with experience. However, these student editors have to write these articles and make these edits to get class credit, and while they generally do some great work here, there are always some who do not want to put in the effort and resort to using LLMs or plagiarizing minutes before the assignment deadline. And this is not unique to the Wikipedia Education program; this happens in every educational institution everywhere around the world, and I'm sure we've all seen (or been) people who engage in academic misconduct at some point in our lives.
As for WP:CIR blocking, that's not possible since most of them only edit a single article, and blocking someone requires chronic/persistent behavioral issues. The damage done by these types of students is inconsequential on its own, but may eventually add up. Even the people who run the education program revert some bad edits, and volunteers revert many more (see my analysis above). Incidentally, a similar initiative did get most of its participants blocked due to excessive gaming and sockpuppetry, so I think WikiEdu is doing a much better job vetting instructors in this regard.
In hindsight, I agree that "automatically blocking anyone whose grades drop below an 80 would help refine this" is not a good statement to make and was a bit of a knee-jerk idea to try to minimize the negative impacts of this program. I think we all should audit student editors' contributions and see which variables affect output quality the most (institution? course? instructor? subject area? training status?) and perhaps take it from there. Children Will Listen (🐄 talk, 🫘 contribs) 06:09, 22 November 2025 (UTC)Reply
That report seems to be a student assignment; I've run into at least one course where all the students had to write/"write" something on that topic. Gnomingstuff (talk) 03:48, 22 November 2025 (UTC)Reply
Surely these reports fail WP:NOTWEBHOST. Children Will Listen (🐄 talk, 🫘 contribs) 04:03, 22 November 2025 (UTC)Reply
See also User talk:Salinafiaz § Reliable sourcing, Manual of Style, and more for another example. Children Will Listen (🐄 talk, 🫘 contribs) 17:53, 24 November 2025 (UTC)Reply
The user has proceeded to revert back to their version and add yet another vague sentence. Children Will Listen (🐄 talk, 🫘 contribs) 21:42, 26 November 2025 (UTC)Reply
@ChildrenWillListen: Reverted. You should probably just report them at this point. WP:IDHT and WP:CIR apply. Dronebogus (talk) 21:46, 26 November 2025 (UTC)Reply
I did think of leaving them alone for a bit and seeing if they come back to fix the issues, but they seemed to have abandoned the article after reverting it. They may, however, show up right before the assignment deadline like many people do with their coursework. As for WP:CIR, as I mentioned above, ANI and the like only deal with chronic behavioral problems, not student editors failing to do a good job on their first few tries.
Interestingly, Salinafiaz has completed all the Wikipedia exercises, unlike most other student editors in their class. Children Will Listen (🐄 talk, 🫘 contribs) 22:01, 26 November 2025 (UTC)Reply
I'd argue that this isn't an issue exclusive to student editors; as a generalisation, all new editors are going to need guidance on MOS and tone, and the problem is just as great (or even greater) with non-student editors. Nil🥝 19:37, 20 November 2025 (UTC)Reply
I’ve said this already, but student editors have to jump into major edits relatively quickly whereas general newbies can take as much time as they need doing incremental work or learning rules. They’re also doing it because they have to, not because they want to. Dronebogus (talk) 22:08, 20 November 2025 (UTC)Reply
If you haven't previously looked at it, you might find it interesting to examine the WikiEd trainings for yourself, or the assignments in one of the currently running courses. Personally, I think the program is appropriately incremental. ~ L 🌸 (talk) 22:47, 20 November 2025 (UTC)Reply


Apparently, Wiki Education is quite effective: "Wikipedia and its little-known ally, Wiki Education, have quietly enlisted and trained more than 140,000 college students to build an army of activists". Media Research Center, at your service. Gråbergs Gråa Sång (talk) 17:55, 21 November 2025 (UTC)Reply

@Gråbergs Gråa Sång: Well, yes, one of WikiEdu's stated goals is to fix our systemic bias problem, which the Media Research Center mislabels as "activism." However, it is obvious that courses like these tend to be the most problematic, since they invite essay-like social critique and undue content in articles unrelated to social theory. Children Will Listen (🐄 talk, 🫘 contribs) 03:38, 22 November 2025 (UTC)Reply
Effective at what? Owning the righties? Or building an encyclopedia? Because a bunch of anti-intellectual conspiracy theorists hating a collaboration between colleges and Wikipedia, because they already hate colleges and Wikipedia separately, isn’t useful or relevant analysis. Dronebogus (talk) 10:53, 23 November 2025 (UTC)Reply
Per the source, "building an army of activists", apparently. If you can do that with whatever Wiki Education gets in funding, it does sound pretty impressive. Gråbergs Gråa Sång (talk) 10:57, 23 November 2025 (UTC)Reply
“Building an army of activists” per a bunch of anti-intellectual conspiracy theorists. These are the sort of people who think Gonzo in a dress is woke indoctrination turning kids trans. It’s nothing but scaremongering to fuel the conservative hate on Wikipedia/higher education. Provide an actual reliable source that shows Wiki Ed is doing something useful and I’ll be more receptive. Dronebogus (talk) 11:04, 23 November 2025 (UTC)Reply
If you were under the impression I was receptive to this... view of reality, that's wrong. But I think it's interesting and Wikipedians should know it exists, MRC writes this because they want people to believe it. Gråbergs Gråa Sång (talk) 11:55, 23 November 2025 (UTC)Reply
I never was under any such assumption, but MRC thinking it’s true/wanting people to believe it doesn’t mean anything. Anti-vaxxers want people to believe them and can cherry pick and exaggerate all they like, but that doesn’t somehow make vaccines as dangerous as they claim. Dronebogus (talk) 12:34, 23 November 2025 (UTC)Reply
  • It really depends on how many become editors down the line. In my experience the immediate effects on articles are usually negative. It might be better if they had to pick a stub and work on it, but a lot of them pick high-level articles that are already well developed and just add stuff to be removed later. GMGtalk 18:41, 24 November 2025 (UTC)Reply
    Yes, having students develop stubs would probably solve a lot of problems. At best we get a genuine improvement; at worst the poor content is sequestered in an obscure part of the encyclopedia that wasn’t in great condition anyway. Dronebogus (talk) 21:28, 26 November 2025 (UTC)Reply
  • Using Education Noticeboard as evidence screams negativity bias. You don't hear much if the class goes smoothly. But you sure hear about them on EN, ANI, CP (copyright problems) or some other place if the class goes poorly. OhanaUnitedTalk page 19:12, 24 November 2025 (UTC)Reply
    At a certain point I think something can just create enough and serious enough problems that the benefits don’t matter. Dronebogus (talk) 06:16, 25 November 2025 (UTC)Reply
    It seems like we may just have incompatible views of WikiEd, but I do want to reiterate that WikiEd does not create undergraduates. WikiEd just creates the noticeboard where you can ask staff to deal with undergraduates for you. ~ L 🌸 (talk) 06:53, 25 November 2025 (UTC)Reply

A policy on 'Awards and recognition' sections


One of my hobbyhorses here is cleaning up promotional articles, particularly BLPs. One tell-tale sign I see frequently is an overstuffed 'Awards and recognition' or 'Awards' section, full of prizes no one has ever heard of, given out by obscure webmagazines or societies. However, similar sections are often created or added to by good-faith editors, and sometimes BLPs should mention genuinely notable awards. As far as I know, there's no clear policy on these sorts of things beyond our general policies on avoiding puffery, overdetail, and trivia. This has occasionally led to editing conflicts.

I've been trying to think through a policy which could help us deal with these issues systematically. I think there are two key things that might help:

  • Awards granted to BLPs should be mentioned only if the award is itself notable (such as a Nobel Prize or an IET Faraday Medal)
  • Except in exceptional circumstances, we should not allow standalone 'Awards and recognition' sections (similarly to how we like to avoid 'Criticism' sections). Mention of awards received should be distributed throughout the text in a sensible way, typically chronologically.

I do worry that for academics, there exist non-notable awards that are nevertheless relevant to summarizing someone's career - these things matter in academia but a lot of the prizes are pretty obscure. We might also consider mentioning awards given by notable organizations if those awards are mentioned in the org's article. Any thoughts on these suggestions? Improvements? —Ganesha811 (talk) 00:16, 20 November 2025 (UTC)Reply

I think if an award received has received coverage in a secondary source, then that's another good reason to include the award in the Wikipedia article, regardless of whether or not that particular award received is notable. Say Sally Willis receives the Jon Brandt Award for Excellence in Journalism and the Jon Brandt award is not a notable award, but in a profile of Sally Willis, The New York Times lists that award amongst her accolades, I think that would be a good reason to include the award. Or perhaps Sally Willis lives in Athens, Ohio and local press The Athens Recorder runs a story on Sally Willis receiving this non-notable award because Sally Willis is the most notable person from Athens and everyone there is super proud of her accomplishments. I think that would be another good reason to include an award in an article. I think a good start to cutting out awards is to exclude the non-notable ones that are only mentioned on the recipient's CV / other personal website and sources from the body that bestows the award (e.g. website, award ceremony documents, etc). Katzrockso (talk) 00:27, 20 November 2025 (UTC)Reply
We could make lists of awards we consider worth mentioning, like RSN. We can also make a list of fake awards that should definitely be removed. I started one over at User:Polygnotus/vanity. There are at least some awards that are notable and have an article, but are not worth mentioning (for example Superbrands). Another complication with requiring articles is that you can require a standalone article about the specific award, or an article about the organisation behind it. 'Awards and recognition' sections can make sense in cases like Quentin Tarantino, who won like 4 trillion awards. See also List of awards and nominations received by Quentin Tarantino. Maybe an article should only be allowed to have a dedicated section for awards if you reach a certain threshold, like 10+ notable ones or if they have their own article. Polygnotus (talk) 03:38, 20 November 2025 (UTC)Reply
  Comment: Way too much policy creep. Many of the major awards in my discipline barely have a presence on Wikipedia. I've gone through the effort to get some content for the bigger ones, but unless someone interested in the topic also thinks to make a Wikipedia page for it, they will slide through the cracks. If an outside source states the award was given, and the source is reliable, why would we default to excluding it from the article? GeogSage (⚔Chat?⚔) 07:07, 20 November 2025 (UTC)Reply
@GeogSage I agree that if a truly reliable and independently written source thinks it's worth mentioning, then it is most likely worth including. The problem is that a lot of these claims do not have a reliable source attached, and often not even a source at all. Polygnotus (talk) 07:19, 20 November 2025 (UTC)Reply
Wikipedia:Biographies_of_living_persons#Reliable_sources: "Wikipedia's sourcing policy, Verifiability, says that all quotations and any material challenged or likely to be challenged must be attributed to a reliable, published source using an inline citation; material not meeting this standard may be removed." You could always tag [citation needed][according to whom?][additional citation(s) needed][promotional source?] if you doubt it. I write a few biographies for academics, and I try to include an award section if applicable. Generally, getting the citation isn't hard if you know they got the award; the most extensive I've done was for Waldo R. Tobler, so I'll use him as an example. Some, like the Andrew McNally Award, 1986, might not have made the transition to the digital realm but are mentioned in sources discussing Tobler. In another biography I'm working on right now (not of a living person), the award was won in 1947, and I'm not even sure the awarding organization is still around. It is noted in multiple peer-reviewed publications discussing the subject though. I feel like if you see an award that isn't sourced, you can try to find it online. If you can't find a source, you can tag it or delete it with an edit summary. I don't think we need to get more complicated than that about what counts for inclusion. GeogSage (⚔Chat?⚔) 07:36, 20 November 2025 (UTC)Reply
I know for film articles, to avoid overstuffing, we only include awards that have articles here. I see no reason why the same guideline couldn't be reasonably applied to BLPs. If one feels an award is notable enough to merit inclusion but it lacks an article, they can certainly undertake the effort to write the article at that point. DonIago (talk) 07:23, 20 November 2025 (UTC)Reply
Not a lot of the big academic awards have Wikipedia pages. The biggest award in American Geography is the Anderson medal of honor, and it is mentioned on the American Association of Geographers page briefly. If we limited it to only awards on the AAG page, most of the ones the AAG issues couldn't be included. GeogSage (⚔Chat?⚔) 07:39, 20 November 2025 (UTC)Reply
@GeogSage I think a section in a larger article, or a standalone article, is both fine. I redirected Anderson medal and Anderson Medal to the appropriate section. Polygnotus (talk) 07:49, 20 November 2025 (UTC)Reply
That is an example of the biggest award in the discipline. A better example might be a University Consortium for Geographic Information Science Education Award, or fellowship. Those would be a pretty big deal career-wise, but the pages for those topics are abysmal. These are referenced in literature on the subjects; why would we need a Wikipedia page to mention them as well? If that is the case, the pages can be made. GeogSage (⚔Chat?⚔) 08:00, 20 November 2025 (UTC)Reply
@GeogSage I added that one as well. I agree that Wikipedia's coverage of academic awards is... not perfect. But I don't think you have to worry about us deleting awards from articles about hardworking scientists. I can't speak for Ganesha811 of course but I think they are more interested in getting rid of fake and dubious awards on promotional articles. So I think the focus is more on CEOs, not academics. Although I agree that if policy is written it is a good idea to take pre-internet and academic awards into account, and treat them very differently than, for example, the Best in Biz awards you can just buy for a couple hundred dollars. Polygnotus (talk) 08:10, 20 November 2025 (UTC)Reply
My rule of thumb is that an award etc should have a decent cite, preferably secondary, but if the award or at least the org behind it has a WP-article, a primary one may be acceptable, say Grammy etc.
I think awards without WP-articles can be ok to include, if there is a decent secondary cite from someone who bothered to notice. WP doesn't know all. Gråbergs Gråa Sång (talk) 09:29, 20 November 2025 (UTC)Reply
These sections are also common in sports articles (e.g. Michael Phelps#Honors and awards and Cathy Freeman#Awards (once I fixed it), and, to pick some local examples that I've worked on, Bill Roycroft#Recognition and John Maclean (sportsperson)#Recognition). Ditto for music, like Luciano Pavarotti#Awards and honors, Blondie (band)#Awards and nominations, and Joan Armatrading#Honours. I agree with @GeogSage: that trying to police this area is guideline creep and could cause unintended consequences; some of the content in sections like this would disrupt the flow of pages if it was mentioned elsewhere. Graham87 (talk) 10:47, 20 November 2025 (UTC)Reply
In general, I think "Recognition" is a decent heading for this stuff. It can cover knighthoods, Grammys and "30 under 30" Time magazine lists etc. If I start an article, I always go with prose, not table, but that is a personal preference. Gråbergs Gråa Sång (talk) 11:01, 20 November 2025 (UTC)Reply
I agree that musicians, athletes and actors/actresses seem like a decent exception, in that they should probably have standalone sections called 'Recognition', 'Awards', or similar, especially if they've won major awards. But I note that the Phelps page, for instance, does seem to generally follow Proposed Rule #1 - that all the awards seem to have their own Wikipedia page, and for good reason. Pavarotti, too, has many notable awards. But does it really matter to anyone, anywhere, that he received an "Eisenhower Medallion"? Does anyone know what that is? Or that Blondie got the 2022 BBC Longshots Audience Award?
@Polygnotus is right to infer that I'm mostly concerned about businesspeople/politicians and junky "online" awards, not academics and athletes. That's where I most frequently see problems. I wonder if we could shape a policy that applies only to those BLPs. I don't think that merely requiring a secondary, "independent", source would do much, because of the proliferation of junk/slop websites that copy press releases or publish paid notices without disclosure. —Ganesha811 (talk) 12:09, 20 November 2025 (UTC)Reply
Google's AI suggests two possible medals:
People to People International (PTPI) "Eisenhower Medallion": This is the highest award given by the organization People to People International, founded by President Eisenhower in 1956 to foster global peace and understanding. Notable recipients include Mother Teresa and Congressman Emanuel Cleaver, II.
American Nuclear Society (ANS) "Dwight D. Eisenhower Medal": Established in 2014, this award recognizes outstanding leadership in public policy for nuclear science and technology, or significant contributions to nuclear nonproliferation. It is presented bi-annually and honors excellence worthy of international recognition. Gråbergs Gråa Sång (talk) 12:28, 20 November 2025 (UTC)Reply
A source that is a copy of a press release isn't independent; just clarify that the secondary source must be non-promotional and it's fine. Katzrockso (talk) 12:30, 20 November 2025 (UTC)Reply
On secondary source for "prize" without WP-article, context matters. Gråbergs Gråa Sång (talk) 12:32, 20 November 2025 (UTC)Reply
Firstly, it seems no extra policy is needed to avoid award-cruft, although it is clearly a major issue on many pages. Secondly, many people may have a long list of awards that are notable according to our secondary sourcing and due weight policies – hence a separate section is often appropriate – whether in prose, list or table form.
That said, it would certainly be helpful to write one or multiple competing essays interpreting how our policies apply to awards. I'm happy to provide feedback on such essays. If during drafting of such an essay it turns out that our policies are in fact deficient, an RfC can be started to upgrade the essay to a policy supplement. Joe vom Titan (talk) 12:23, 29 November 2025 (UTC)Reply
Indexes of topics by country

I just did some cleanup of Index of Belgium-related articles, but it seems to me that this is almost by definition a very incomplete, random "index" of some of the many, many articles that could be included, and as such has no real use. Making (and keeping) it complete is a Sisyphean task and would lead to a much longer page in any case. The same applies to all articles in Category:Indexes of topics by country, I think. Presumably the same applies to indexes for other topics as well, but perhaps stick for now to the country ones as a first point of discussion?

Are these indexes something we should have or can they better be deleted or redirected to outlines (as sometimes happens) or to categories? Fram (talk) 10:29, 21 November 2025 (UTC)Reply

I think these should be redundant to outlines (which highlight the most important articles) or categories (which list all articles). Indices seem to lie somewhere in between and are presumably less useful than either to readers. For instance, Belgium at the 2004 Summer Olympics is listed in the Belgium index, but it doesn't list any of the many, many similar pages for other years, for no reason I can discern. Toadspike [Talk] 10:38, 21 November 2025 (UTC)Reply
Most of them are now up for discussion at Wikipedia:Articles for deletion/Index of Algeria-related articles. Fram (talk) 11:01, 25 November 2025 (UTC)Reply

MAGA civil war article


Two days ago, The Guardian put out a very, very long and grand article: “White nationalist Nick Fuentes is exposing a civil war among US Republicans: ‘We look like clowns’”

The New York Times reported “Nick Fuentes’s Rise Puts MAGA Movement in a ‘Time of Choosing’”

CNN covered it in “How a Holocaust Denier Sparked a MAGA Civil War”, an episode of its One Thing podcast

There are more examples like this. These sources, which are deemed reliable by the community, are all stating that this isn't just some minor thing, but a major event affecting politics.

There would not be enough room to discuss this situation only in articles such as Nick Fuentes or MAGA, nor do I think that it would be appropriate to do so, as this affects more than just one person or group.

Due to these reasons, I want to write an article about this ongoing conflict. Are there any objections, and / or suggestions for titles? How should we proceed?

Wikieditor662 (talk) 19:17, 22 November 2025 (UTC)Reply

It's not actually a "conflict" in the sense they're using the term; it's just hyperbole. Back in my day the Democrats were the ones having a "civil war". Unfortunately for your idea, sensationalist headlines don't make it a real and definite "thing" that can be the subject of an article. That's even after you can say that you're writing about something other than the fickle passings of the news cycle. GMGtalk 19:38, 22 November 2025 (UTC)Reply
GeogSage (⚔Chat?⚔) 19:40, 22 November 2025 (UTC)Reply
But the sources state that this isn't just a passing headline, but that it's a major, longstanding event.
The Guardian states "The result of that interview has been a bitter and widening civil war within the American right that has exposed longstanding fissures – between conservatives and populists, Zionists and Israel skeptics, mainstream Maga right and far right – as well as revealed the extent to which a Republican party that has been flooded in recent years by extremists now seems unable to contain them, or even agree if it should. A power struggle already under way inside many rightwing organizations, people familiar told me, has now spilled into the open."
According to Ms. now, "They [a pro-Hitler wing of MAGA] are, in fact, rapidly defining what MAGA will mean in the years after the nearly octogenarian Trump leaves the stage."
These sources, which, again, are deemed reliable, state that this is having enormous impact, that it represents long-standing issues, and that it will quite likely affect what happens even 10 years from now. Wikieditor662 (talk) 20:00, 22 November 2025 (UTC)Reply
Maybe it will. It's happened before. But we can't really predict that based on a burst of news coverage. Don't get me wrong. You don't need anybody's permission to write an article, but the chances are probably better than even that it gets deleted, at least for the moment. GMGtalk 20:21, 22 November 2025 (UTC)Reply
How longstanding or grand does it have to be? This goes years back.
Over two years ago, Newsweek reported "MAGA Divides Grow as Israel War Intensifies" (Newsweek isn't considered fully reliable, but you get the point)
For a more specific example, Politico reported all the way back in January, nearly a year ago: "The MAGA split over Israel"
Wikieditor662 (talk) 20:50, 22 November 2025 (UTC)Reply
It would be more productive to improve the Republican article. None of this is happening in a vacuum; these newer developments require the context of what the party is and has done over the past fifty (at least) years. Schazjmd (talk) 21:06, 22 November 2025 (UTC)Reply
Which kind of gets to the issue of the definite "thing" you're aiming for. If the underlying topic boils down to "people in x-group disagreeing about Israel" then welcome to the club. That pretty much describes every group down to families, friends, and marriages. When the Federalists got into their spat, it materially shaped the broad trajectory of the US government. In our story today, the left is supposed to be the pro-Palestinian character, and they still don't really move the needle on actual law/policy beyond the daily headlines. GMGtalk 21:14, 22 November 2025 (UTC)Reply
Hmmm, I suppose you're right about Israel-Palestine. Perhaps instead of sensationalist headlines like "civil war", it could be titled something like internal divisions within MAGA, or internal divisions within the Republican party, and then contain all of this material?
@Schazjmd the Republican article is over 12K words; there won't be room to add all of this to that article.
Wikieditor662 (talk) 21:23, 22 November 2025 (UTC)Reply
Meh… What would be surprising is if there weren’t any internal divisions within MAGA. All political parties (and factions within parties) have them. The question is whether they have a lasting impact… and in this case it is too soon to know. Blueboar (talk) 22:03, 22 November 2025 (UTC)Reply
Maybe, but then you have a different issue all together. Are you now looking at an article topic which is so stupendously broad that it amounts to an indiscriminate list of information? What about those who support the ICE crackdown and those who see it as government overreach? What about isolationists and those who support foreign intervention? How do we even treat definite group membership as "a MAGA", instead of a loose coalition of conservatives more-or-less supportive of Trump and/or a particular slogan? Don't want to make the same mistake people do with ANTIFA and act like it comes with a membership card and monthly dues.
It's all silly hypothetical until folks show up to add content that was never intended, but technically meets the inclusion criteria. GMGtalk 22:39, 22 November 2025 (UTC)Reply
Rather than indiscriminate, the sources seem to suggest that there are two clear sides on this. The Guardian says "Conservative institutions [...] are now squeezed between a strident Maga mainstream and a naked far right."
And it seems that how the two sides react to issues is also clear.
For the examples you gave, the groypers and the far right would be more approving of ICE crackdowns (and even wanting more crackdowns) and isolationism, while the mainstream right would be more cautious about these. But of course, we'd only use issues that the sources state are important to this.
As for MAGA, I see your point, perhaps this could center about something more specific, such as the GOP, right-wingism, or conservatism.
@GeogSage But again, there's no room in the bigger articles; many of them are too long already.
Wikieditor662 (talk) 23:50, 22 November 2025 (UTC)Reply
The MAGA article is only 4,777 words. GeogSage (⚔Chat?⚔) 03:11, 23 November 2025 (UTC)Reply
True, but now I'm thinking MAGA might be a problematic area to center this around, as GMG said: How do we even treat definite group membership as "a MAGA", instead of a loose coalition of conservatives more-or-less supportive of Trump and/or a particular slogan? Don't want to make the same mistake people do with ANTIFA and act like it comes with a membership card and monthly dues. Wikieditor662 (talk) 03:17, 23 November 2025 (UTC)Reply
It sounds like you're really set on making an article on this. Looking at your page statistics, it doesn't look like you have a lot of experience making them, which is fine. You don't need permission to try to put a page together. If you think it will pass Wikipedia:Verifiability and Wikipedia:Notability, then you can just make a page for it. In this case, I would be surprised if it passed Wikipedia:New pages patrol, and even if it did, suspect someone would come and merge it with something. I won't step in to stop you, don't care enough about this topic to try and execute a merge, and wouldn't be the one reviewing it, so you don't need to convince me. Just be sure to explore all options, have a pile of sources, and don't be surprised if people are not convinced. GeogSage (⚔Chat?⚔) 03:49, 23 November 2025 (UTC)Reply
Well, this isn't really about convincing you, as much as it is to figure out the best way to go forward. Are you sure it's worth it to try and make a big article, with many sources, which can take hours, just for it to get deleted? Wikieditor662 (talk) 05:12, 23 November 2025 (UTC)Reply
I mean, that is a question to ask yourself. I'm not convinced; honestly, I think this warrants maybe a sentence somewhere on one of the pages in the Timeline of the Donald Trump presidencies, and would start looking for how I could use these sources to improve existing articles before creating a whole new one. However, if you think it will pass, go for it. GeogSage (⚔Chat?⚔) 05:29, 23 November 2025 (UTC)Reply
I mean, I think that perhaps it should pass, but whether it will, or if it's worth it even though it might be deleted, that I have no idea. But yeah, perhaps it is best to take up your suggestion, and only turn it into an article if it becomes big enough. Although I'm still on the fence over what article/s to add it to. Wikieditor662 (talk) 05:33, 23 November 2025 (UTC)Reply
Help:How to mine a source might help. GeogSage (⚔Chat?⚔) 05:42, 23 November 2025 (UTC)Reply
Also, I was about to start considering adding it to the MAGA article, but the beginning said "This article is about the political slogan. For the political movement associated with the slogan, see Trumpism.", and the Trumpism article is over 13K words... and I don't think this would fit well in a timeline article, as it's one major event rather than multiple small events which can be put in bullet points.
Any other suggestions for article/s which this can be added to?
Wikieditor662 (talk) 15:07, 24 November 2025 (UTC)Reply
As stated, with this, I'd find another article and see if you can make a section on this topic using the sources. If that section gets big enough, then consider a split. It isn't a race. GeogSage (⚔Chat?⚔) 22:59, 22 November 2025 (UTC)Reply
See WP:10YT User:Bluethricecreamman (Talk·Contribs) 19:40, 22 November 2025 (UTC)Reply
In addition to above advice, it is in general a good idea to ignore headlines as sources of information (sometimes they help with common names etc.), as they have different editorial processes to article content. CMD (talk) 04:07, 23 November 2025 (UTC)Reply
Of course. If you look deeper into the conversation, you can also see I have specific quotes from these articles as well. Wikieditor662 (talk) 05:13, 23 November 2025 (UTC)Reply

IP talk page blanking bots, now that we have temporary accounts


Three years ago, an editor got consensus to create a bot to blank all stale IP talk pages (Wikipedia:Village pump (proposals)/Archive 190#RfC: Bot to blank old IP talkpages). The main reason for this was that "Stale warnings and other messages will confuse legitimate new editors editing from that IP seeing it apparently directed at them".

Fast forward to 2025, and we have temporary accounts; new editors will never be directed toward IP talk pages. So we don't need to worry about scaring them off.

Given that, I would like to see what the community's attitude is toward this problem now.

As for me, I made this post because I'm trying to track down a Mississippi IP editor who inserted copyright violations into articles about American TV soaps, so I can remove the copyvios. Having their talk pages easily accessible, for searching and whatnot, would be very helpful. Speaking more generally in terms of my CCI work, non-obscured, accessible talk pages allow me to more easily link to previous warnings, track copyright violations that were spotted at the time, and track older socks[8][9][10][11], especially if they were duck-blocked at the time but not recorded at SPI. I also only have 24 hours in each day; time spent going back to previous revisions is time I'm not spending removing problematic content. GreenLipstickLesbian💌🧸 09:35, 23 November 2025 (UTC)Reply
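(Aside, for anyone wondering how "stale" gets determined in practice: an IP's most recent edit is a single lookup against the MediaWiki Action API. A minimal TypeScript sketch follows; the action=query / list=usercontribs parameters are real, but the helper name and the five-year cutoff are purely illustrative and say nothing about how the actual bot is implemented.)

```typescript
// Check whether an IP's most recent contribution is older than a cutoff,
// using the MediaWiki Action API (action=query, list=usercontribs).
const API = "https://en.wikipedia.org/w/api.php";

async function lastEditOlderThanYears(ip: string, years: number): Promise<boolean> {
  const params = new URLSearchParams({
    action: "query",
    list: "usercontribs",
    ucuser: ip,
    uclimit: "1", // the API returns the newest contribution first
    format: "json",
    origin: "*", // needed for anonymous cross-origin requests from a browser
  });
  const res = await fetch(`${API}?${params}`);
  const data = await res.json();
  const contribs = data?.query?.usercontribs ?? [];
  if (contribs.length === 0) return true; // no (visible) edits at all
  const last = new Date(contribs[0].timestamp); // ISO 8601 timestamp
  const cutoff = new Date();
  cutoff.setFullYear(cutoff.getFullYear() - years);
  return last < cutoff;
}

// Usage: await lastEditOlderThanYears("192.0.2.1", 5) → true for a stale IP.
```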

I support stopping the bot. It has served its purpose. Toadspike [Talk] 09:42, 23 November 2025 (UTC)Reply
I do too. Thryduulf (talk) 11:00, 23 November 2025 (UTC)Reply
+1 ~/Bunnypranav:<ping> 12:25, 23 November 2025 (UTC)Reply
I'd support stopping this. I looked quickly, but maybe it's faster (I'm not sure of the best way to find this) to just ask: is any non-blocked bot currently performing this task? Skynxnex (talk) 12:33, 23 November 2025 (UTC)Reply
The task was inherited by User:VulpesBot (run sporadically by Dr vulpes, though I believe they've said they plan to run it again?), but I know some editors do large AWB runs to indiscriminately blank the old IP talk pages. GreenLipstickLesbian💌🧸 20:34, 23 November 2025 (UTC)Reply
Ah, thanks. Still agree we should stop blanking them at this point. (And earlier maybe would have been better.) Skynxnex (talk) 21:36, 23 November 2025 (UTC)Reply
  • Just to clarify, are we talking about stopping the bot with respect to temporary accounts? Because the bot is set to only blank pages for IPs who have not edited in over five years, there are still tens of thousands of IP talk pages identifying IP addresses. If you look at, for example, User talk pages that link to "Blueberry", there are dozens of them just on that list. BD2412 T 18:50, 23 November 2025 (UTC)Reply
    No, it is for IP talk pages only, per what I understood from GLL's example above. ~/Bunnypranav:<ping> 18:53, 23 November 2025 (UTC)Reply
    No, it's stopping it for the talk pages of IPs. There are benefits to not blanking these IP talk pages (detailed in GLL's first post), and given that no new editors will be assigned these talk pages in the future there remain almost no benefits to blanking them.
    Whether talk pages of temporary accounts should be blanked after the account expires is not something I can recall seeing anywhere and is not part of this proposal, but given that they will not be reused I can't immediately see any benefits to doing so. Thryduulf (talk) 19:41, 23 November 2025 (UTC)Reply
    I agree with Thryduulf that I see no benefit to blanking them. I do see potential harm, however, for much the same reason. I often use the What Links Here tool to investigate, and if TA talk pages get blanked, then just like with old IPs, I am no longer able to do that. GreenLipstickLesbian💌🧸 20:42, 23 November 2025 (UTC)Reply
    I would think your use of "What Links Here" is hampered by an excess of links to IP talk pages from which no edits have come in many years, even decades. Wikipedia's purpose is not to serve as a permanent host for long-irrelevant IP talk page messages. That should be even less so when the IP talk pages no longer reflect any current account usage due to the changeover. BD2412 T 20:57, 23 November 2025 (UTC)Reply
    Interestingly enough, it is not - generally, if there are enough links to IP talk pages to make the tool unusable, then there are enough links to registered account talk pages to make it unusable. Removing IP talk pages just hampers my ability to look for historic disruption on lower-trafficked pages, and also stops me from being able to use the search tool as effectively. GreenLipstickLesbian💌🧸 21:03, 23 November 2025 (UTC)Reply
    To be perfectly clear, the typical ancient IP talk page message has been where the IP did something like randomly add "poop" to an article once or twice in, say, 2012, got reverted with a warning, and no other edits ever came from that IP address (although I grant that most of those have already been blanked). I think we can refine the model to maintain pages where there is a possibility of copyvio involvement or the like, but I am at least dubious about the long term value of maintaining those pages. BD2412 T 21:47, 23 November 2025 (UTC)Reply
    A lot of these old accounts don't always get reverted for copyvio; they get reverted with anti-spam, anti-unsourced-content, page-hijacking, and really pretty much every warning under the sun. Knowing at a glance that an account was editing disruptively in a topic area is still very useful. See User talk:70.49.196.202 or User talk:62.28.161.202 for examples - I just reverted a bot blanking on the first, and the other was saved because the IP got notified of an AfD late last year. Both of these editors have still-open CCIs which either have been or will need to be expanded to include IP edits.
    If somebody sees an IP where the IP only made one vandal edit, got warned, and would rather blank the talkpage than fix whatever lint error they found, they could still do so manually. GreenLipstickLesbian💌🧸 22:04, 23 November 2025 (UTC)Reply
    @BD2412 VulpesBot is exclusion compliant so you can just stick {{nobots}} on User talk:70.49.196.202 if you want. Polygnotus (talk) 00:00, 24 November 2025 (UTC)Reply
    That was for me. I do a lot of IP talk page blanking outside of VulpesBot's strictures. BD2412 T 00:02, 24 November 2025 (UTC)Reply
    I agree that there's no need to hide the content of these pages, and since temp accounts only last for 90 days (under the current configuration), there's no need to ever blank those. WhatamIdoing (talk) 21:18, 23 November 2025 (UTC)Reply

Instead of showing UTC time, show the time the user is in


On edits, diffs, and posts, the timestamp is always in UTC. Discord has a feature where, when you copy/view a timestamp, it displays the time according to the viewer’s local timezone. For example, if you report a post that occurred at a specific time in your timezone, another user will see the corresponding time in their own timezone, which helps avoid confusion. I believe adopting a similar feature would support the modernization of Wikipedia. Rc2barrington (talk) 02:46, 24 November 2025 (UTC)Reply

You can have that with User:Mxn/CommentsInLocalTime or WP:LOCO.
This somewhat used to be a built-in feature (m:Help:Date formatting and linking): every date was linked everywhere to automatically convert the timezone according to the user's preference at Special:Preferences#ooui-23. However, various things resulted in the feature being disabled and then removed: Wikipedia:Manual of Style/Dates and numbers#cite_ref-5. Aaron Liu (talk) 03:22, 24 November 2025 (UTC)Reply
That feature converted the format, but not the time zone. Also, if we wanted, there's a #dateformat parser function that could be used to format dates according to the user preference. But we've never wanted. Anomie 04:05, 24 November 2025 (UTC)Reply
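For anyone curious what those user scripts do under the hood, the client-side conversion is small. A hedged sketch, assuming the standard English Wikipedia signature format ("HH:MM, D Month YYYY (UTC)"); real scripts such as CommentsInLocalTime handle many more formats and languages:

```typescript
// Convert an English Wikipedia signature timestamp to the viewer's local time.
// Assumes the standard "HH:MM, D Month YYYY (UTC)" format; this is a
// simplification, and the function name is illustrative.
const MONTHS = [
  "January", "February", "March", "April", "May", "June",
  "July", "August", "September", "October", "November", "December",
];

function toLocalTime(sig: string): string | null {
  const m = sig.match(/(\d{2}):(\d{2}), (\d{1,2}) (\w+) (\d{4}) \(UTC\)/);
  if (!m) return null; // not a recognizable timestamp
  const [, hh, mm, day, month, year] = m;
  const monthIndex = MONTHS.indexOf(month);
  if (monthIndex === -1) return null;
  // Date.UTC interprets the fields as UTC; the resulting Date then renders
  // in the viewer's local timezone by default.
  const utc = Date.UTC(Number(year), monthIndex, Number(day), Number(hh), Number(mm));
  return new Date(utc).toLocaleString();
}

// toLocalTime("02:46, 24 November 2025 (UTC)")
// → "11/23/2025, 9:46:00 PM" for a viewer on US Eastern time (UTC-5).
```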

I know this is the idea lab and we're not supposed to just support or oppose, but I can't really find a "yes and" here. I'm generally skeptical of attempts to make users see something different from what was written, even with an opt-in. Fonts and dark mode, OK, I guess, but not actually changing the text. I think that was a mistake from the beginning. --Trovatore (talk) 03:39, 24 November 2025 (UTC)Reply
The perks of living in England are that UTC is just the current time for me. (outside of summer) GarethBaloney (talk) 11:37, 24 November 2025 (UTC)Reply
For myself, I have my preferences set so that everything is set to my time zone automatically. The only things that don't get converted are dates and times when I am editing the source.
Converting the time and date when I need to is a bit of a pain, but it is better for me as I can see at a glance on talk pages how long ago the last replies were, which is the most common thing I see related to time on Wikipedia.
In short, I think that what we have works. --Super Goku V (talk) 05:50, 24 November 2025 (UTC)Reply
DiscussionTools puts "Latest comment: 41 minutes ago" at the top of every talk page and each ==Section==, so you should be able to see at a glance on talk pages how long ago the last replies were no matter what your timezone settings are.
I used to set my local time at Special:Preferences#mw-prefsection-rendering-timeoffset but eventually it became too much of a hassle to keep straight which timestamp on the talk page corresponded to which edit in the page history. I find it much simpler to have the whole thing in UTC. The UTC clock gadget in Special:Preferences#mw-prefsection-gadgets-gadget-section-appearance may be helpful, if you are trying to figure out what time it is in UTC right now. (I turned that off with Vector 2022, though.) WhatamIdoing (talk) 07:18, 24 November 2025 (UTC)Reply
So as seen in this image I just really think it would be better to show the time I AM IN. Not the standardized UTC time. Rc2barrington (talk) 01:26, 25 November 2025 (UTC)Reply
Try the scripts I linked above. Aaron Liu (talk) 01:38, 25 November 2025 (UTC)Reply
Apparently I don't use DiscussionTools on Wikipedia, but I recall seeing something like that on other Wikis. Still I feel more comfortable seeing the exact time people made their replies rather than seeing the UTC time of when they made their comments. Besides, I don't need to convert the date and time enough to where that would be the bigger hassle. (And yes, I have the UTC clock in the upper-right corner just to keep myself aware of it.) --Super Goku V (talk) 05:56, 30 November 2025 (UTC)Reply

Potential expansion of CSD G15

edit

Hello all. In order to align CSD G15 with the newly-accepted guideline WP:NEWLLM, I've created a topic on the CSD talk page as a place for RFCBEFORE workshopping of a potential broadening of G15 to encompass all primarily AI-generated articles, whether or not they've been reviewed by a human. See Wikipedia talk:Speedy deletion#Broaden G15 to align with new guideline and please weigh in if you're interested: first and foremost, on whether you think it should be expanded to align with the new guideline at all, and if so, on any suggestions you might have for what should be changed or added to the wording of G15. Athanelar (talk) 16:14, 24 November 2025 (UTC)Reply

User Configured Content Warnings

edit

I know that Wikipedia is not censored and we don't force content warnings, but maybe some people don't want to suddenly see things like gore/nudity without a clear warning. Maybe people can choose what they do or do not want to see in their settings? VicAsksWhy (talk) 17:02, 25 November 2025 (UTC)Reply

Even if that was feasible (are we going to have a setting for spiders? I don't want to see spiders), most readers don't have accounts so they can't set any preferences. Schazjmd (talk) 18:19, 25 November 2025 (UTC)Reply
Yeah, the thing about the accounts is a good point. I was thinking fairly broad categories; it's definitely impossible to cover every phobia to ever exist. VicAsksWhy (talk) 23:06, 25 November 2025 (UTC)Reply
It's been brought up before, sorry I don't have a link to the most recent discussion but as I recall, the idea was to default-hide that type of image so the reader would have to select to show them. There was no consensus for the suggestion. Schazjmd (talk) 23:40, 25 November 2025 (UTC)Reply
If someone does track down major discussions on the topic, they should add them to WP:PEREN#Censor offensive images. Anomie 23:44, 25 November 2025 (UTC)Reply
Alright, got it. Just a little idea I had. VicAsksWhy (talk) 23:45, 25 November 2025 (UTC)Reply
@VicAsksWhy, you couldn't know that it's been brought up before. Check out Anomie's link to WP:PEREN, there's a lot of interesting stuff in there. Schazjmd (talk) 23:53, 25 November 2025 (UTC)Reply
Alright, I'll be sure to check that out. Thanks! VicAsksWhy (talk) 23:56, 25 November 2025 (UTC)Reply
In 2010, the recommended categories were sex/nudity, violence, religious, and gore/disgusting content. In 2011, the Image filter referendum resulted in the project being cancelled. Examples of the four categories that were mentioned during the discussion included stills from notable porn movies, a photo of a professional fighter gouging out his opponent's eye with his thumb, drawings of Muhammad, and various illustrations in medical articles (e.g., the lead image of Smallpox). WhatamIdoing (talk) 22:24, 26 November 2025 (UTC)Reply
This had me thinking: why doesn't Wikipedia have any settings for non-logged-in users? Most other sites have settings for people who are not logged in, and those that don't usually don't even allow access to their site unless you're logged in. Many settings, like appearance and search, don't seem like they inherently require (or should require) an account. Is there something I'm missing? If not, maybe we should start a discussion on this. Misterpotatoman (talk) 07:23, 29 November 2025 (UTC)Reply
The general approach used by websites to save settings for users without accounts is to save the settings in the user's browser (either in a cookie or in local storage), and then either return the cookie information for the server to process, or use Javascript in the browser to change the page accordingly. A cookie-based approach isn't cache-friendly, so more servers are needed to handle the workload. At the scale of Wikipedia's readership, the required resources add up quickly. Using Javascript either causes visible changes to a page after loading, or requires the page to wait to finish its Javascript processing before rendering the page, reducing responsiveness to the reader. There are tradeoffs with each approach, and so far, the community and development team prefer the tradeoff of mostly not having settings for non-logged in users. (Vector 2022 introduced some settings related to its layout.) Specifically for content filtering, since this would generally be something a user would want for all websites, it would be more effective to manage within the user's viewing device or personal network. isaacl (talk) 08:31, 29 November 2025 (UTC)Reply
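To make the local-storage option concrete, here is a minimal TypeScript sketch of a single hypothetical "hide sensitive images" preference; the key name and CSS class are invented for illustration, not an existing MediaWiki API.

```typescript
// Persist a preference in the browser itself, with no account needed.
// Everything stays client-side, so cached page HTML is unaffected, but
// the script has to run after the page loads (the "visible change"
// tradeoff described above). Key and class names are hypothetical.
const PREF_KEY = "hideSensitiveImages";

function setPref(hide: boolean): void {
  localStorage.setItem(PREF_KEY, hide ? "1" : "0");
}

function applyPref(): void {
  if (localStorage.getItem(PREF_KEY) === "1") {
    document
      .querySelectorAll<HTMLElement>(".sensitive-image")
      .forEach((el) => {
        el.style.visibility = "hidden";
      });
  }
}

applyPref(); // runs post-load, so images may flash before being hidden
```

Because the preference never reaches the server, pages stay cache-friendly, at the cost of the brief visible change after loading.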
Thanks! I didn't know that. Misterpotatoman (talk) 08:36, 29 November 2025 (UTC)Reply
Wait five years; some AI company will come out with automated content filters to hide nudity, spiders, or whatever you want. Of course, all at the expense of privacy... Anne drew (talk · contribs) 01:57, 26 November 2025 (UTC)Reply
@VicAsksWhy: There's always Help:Options to hide an image. JJPMaster (she/they) 02:06, 26 November 2025 (UTC)Reply

Mass-reverting AI serial abusers

edit

If someone has repeatedly used an LLM without adequate verification of its output, I think we should be able to mass-revert their edits. I envisage a system whereby we only have to glance over each edit and check it is AI-generated, rather than the much higher bar of reverting the cases where the AI has caused a definite problem. My rationale is that if someone has repeatedly failed to use AI responsibly, then their other uses can be assumed to be irresponsible as well. Roughly speaking, I imagine the level of abuse required being roughly the current threshold for a dedicated subpage of the AI cleanup noticeboard. It has been remarked on numerous occasions that checking whether AI output is inclusion-worthy is about as hard as writing the material from scratch, so I think requiring other users to perform this level of checking before reverting AI edits is not reasonable. What do people think? lp0 on fire () 22:03, 26 November 2025 (UTC)Reply

Are we talking about a blocked user? Was there a discussion about their behavior? I could imagine forming a consensus to Wikipedia:Rollback all of an individual's edits, but I'm not sure that I'd recommend that an individual editor unilaterally declare that everything another editor did in the mainspace is definitely AI and should all be reverted.
Also, outside the mainspace, it's a bit more complicated. If an AI-generated comment on a talk page received a reply, it probably shouldn't be reverted. WhatamIdoing (talk) 23:42, 26 November 2025 (UTC)Reply
IDK if a tool like this is a good idea, but if it did exist I'd envision it being used for blocked editors (look up the user whirlingmerc for an example that wasted hours of my time). For editors who have not been blocked, it's appropriate to ask them to clean up their own mess by self-reverting all the problematic contributions. -- LWG talk 01:09, 27 November 2025 (UTC)Reply
I think it certainly applies to talk pages, per wall of text issues. All AI edits should be deleted, per my comment below. Yesterday, all my dreams... (talk) 14:54, 27 November 2025 (UTC)Reply
I agree that if an editor has been blocked for using AI, reverting any of their edits that look like AI output should be allowed. This sounds like presumptive deletion in copyright cleanup. I don't think we need a special tool for this though. Toadspike [Talk] 07:23, 27 November 2025 (UTC)Reply
That presumptive deletion is exactly the idea I was going for. I wasn't suggesting a special tool, but I think mirroring the wording there pretty much exactly could save a lot of time (i.e. not requiring that the user be blocked). If someone does a long spree of AI additions but leaves the project before anyone notices, there's no need to block them, but being allowed to mass-revert their mainspace edits would still be helpful. lp0 on fire () 07:45, 27 November 2025 (UTC)Reply
I agree, and think that to succeed you need to invent a name for it, say "vagabond AI editor" reverts. I think this is important because the trend is an increase in AI edits. And I think it should also apply to talk pages, given wall-of-text issues. AI edits are the termite that can ruin Wikipedia. Yesterday, all my dreams... (talk) 14:50, 27 November 2025 (UTC)Reply
I don't see why we can't just call it presumptive deletion. For talk pages, we have {{aitop}}/{{aibottom}} already and I think that's enough. lp0 on fire () 15:12, 27 November 2025 (UTC)Reply
Or we could make something similar to Template:Single-purpose account, except instead of saying:

Example (talkcontribs) has made few or no other edits outside this topic.

for AI use, it could say something like:

WhatamIdoing believes that this comment was written by generative AI instead of by Example (talkcontribs).

WhatamIdoing (talk) 20:49, 27 November 2025 (UTC)Reply
Yesterday, I'm not convinced with your view. In fact, you're rapidly making me less supportive of this whole idea. It begins to feel like this:
  • We should revert everything.
    • Maybe not talk page comments, if someone's already replied.
  • No, really, everything, because it's a Wikipedia:Wall of text.
    • Even if it's just a short reply?
  • Really, everything, because everything is a Wikipedia:Wall of text.
You obviously loathe AI use, which is fine. But what if the comment is not a wall of text? Would you seriously recommend reverting a one-word reply because a single word is "a wall of text"? How would you even know whether such a short comment used AI?
Would reverting a talk-page comment actually help anyone? WP:REDACT says usually no, particularly if someone's already replied. Would it be better than alternatives such as striking (like we do with socks), hatting (e.g., aitop/aibottom), labeling (like we do for WP:SPAs), or archiving? I doubt it.
I wonder whether your ham-fisted recommendation signals that you're getting burned out. If editing feels like a Sisyphean struggle against the forces of spam and stupidity, then you might try to find a way to contribute that feels fun and/or effective. WhatamIdoing (talk) 20:45, 27 November 2025 (UTC)Reply
Well, you know that our agreement rate is pretty low. But that is the nature of free speech. As for "forces of spam and stupidity" being in full swing on many pages, we actually agree on that. And I assume you are also thinking of my talk comment on fuzzy concept. On that page OR and stupidity are in full swing indeed. We can not have a "respectable" encyclopedia with that type of content. Yesterday, all my dreams... (talk) 00:44, 28 November 2025 (UTC)Reply
I have spent no time looking at your comments on talk pages, so no, I had no idea that you posted a comment there (that says nothing about AI use). WhatamIdoing (talk) 04:04, 28 November 2025 (UTC)Reply
I've been thinking about this sort of thing as well. Regardless of the approach we end up taking, we do need to be more proactive in removing unverified AI content and quickly putting a stop to people who add it. Thebiguglyalien (talk) 🛸 04:57, 28 November 2025 (UTC)Reply
Agreed. A quick look at the AI cleanup noticeboard will make it abundantly clear how serious a problem this is. As I see it, there are three levels of assuming good faith we could exercise when doing the cleanup (clarifying what I mean here because I think there was some confusion above; sorry in advance for the wall of text).
  1. If someone has repeatedly misused LLMs, we go through their contributions and delete anything that violates policy (weasel/peacock words, OR, hallucinations, &c.) but we can't revert anything until we've identified the problem. This might involve verifying sources and/or translations, might require specialised knowledge, and is about as difficult as writing the content from scratch. This is the current standard, and it makes cleaning up after LLM use unreasonably difficult, leading to a growing backlog of additions to Wikipedia that might be nonsense.
  2. Like copyright violations, any mainspace edits by an AI abuser can be reverted indiscriminately. This would make cleaning up after AI misuse very easy (although, given how easy it is to write content with AI, this might still not be enough).
  3. What I was originally suggesting was a middle ground: if someone has repeatedly misused LLMs, then any edit of theirs that looks AI-generated can be reverted without proof that the AI has hallucinated or otherwise violated policy, because they are presumed incompetent. This would still make cleanup much easier than it currently is, with reduced risk of undoing good contributions.
lp0 on fire () 07:41, 28 November 2025 (UTC)Reply
Sockpuppet cleanup allows other users to restore sock edits if they are positive (every now and then some are, or partially are), without putting that burden on the cleanup. CMD (talk) 09:13, 28 November 2025 (UTC)Reply
I don’t think it’s a matter of LLM or not LLM; it’s a matter of good editors and bad ones. There were plenty of bad editors who tried to push bad articles before LLM. The fairest way to approach low-quality articles is the same way it has always been done: with tags that can only be removed if an editor has done the necessary work to justify their removal.
We can’t allow LLM to become a reason for people to ban whoever they want, for whatever reason. Take a contentious subject, for example: an editor could be falsely accused of using an LLM in order to censor their vote on articles. Orlando Davis (talk) 15:53, 28 November 2025 (UTC)Reply
Instead of deleting the articles, we can have a 3 strike policy where you get banned for 24 hours if you have 3 strikes, and are banned permanently after enough strikes without an attempt to change your behavior. Orlando Davis (talk) 16:29, 28 November 2025 (UTC)Reply
The difference is that LLMs allow people to churn out huge amounts of bad content extremely quickly without first having to learn how Wikipedia works, which makes it significantly more disruptive than just "bad editors".
I don't think your worries about false accusations make sense. If anyone tried to censor someone by accusing them of using AI, then much like accusing someone of being a sock, that would be highly problematic and likely lead to the accuser being blocked (especially in a contentious topic); however, it's much easier to spot a bad-faith accusation of AI than a bad-faith accusation of sockpuppetry.
Your suggestion of "get banned if you have enough strikes" (I assume you mean blocked, not banned) doesn't sound substantially different from the standard system of "you get blocked if you keep doing stuff wrong after being warned", and indeed the templates {{uw-ai1}} through {{uw-ai4}} exist for this very purpose.
I think you may have misunderstood the purpose of this proposal: it's not for dealing with people who disrupt the project using AI but rather for cleaning up their edits, which otherwise demands an unreasonable amount of time from the users doing the cleanup. lp0 on fire () 16:43, 28 November 2025 (UTC)Reply
Couldn’t a way to reduce backlog be to put a cap on how many articles and edits a user can perform per day, to give reviewers enough time to keep up? For example, a 1–2 article per day limit and a 100–200 edits per day limit. What do other editors think? Orlando Davis (talk) 17:09, 28 November 2025 (UTC)Reply
That sounds way out of scope for this issue. Bear in mind a lot of AI cleanup involves cleaning up after editors who stopped before (or when) they were noticed, so such a filter would have to apply to all users. I also note that 100 edits a day isn't very much for normal editing, but it's a huge amount of work to clean up after 100 edits of AI drivel. For example, see Wikipedia:WikiProject AI Cleanup/Noticeboard/2025-09-17 Thefallguy2025, which is from early September and still less than half done. lp0 on fire () 17:25, 28 November 2025 (UTC)Reply
What about the cap on edits being applied more strictly to flagged users? Orlando Davis (talk) 17:41, 28 November 2025 (UTC)Reply
Or to newbies. Very few brand-new accounts make even five edits on the first day. WhatamIdoing (talk) 01:24, 29 November 2025 (UTC)Reply
To the extent that new accounts do, they're usually people who have made accounts before (sockpuppets, WP:CLEANSTART) Katzrockso (talk) 01:28, 29 November 2025 (UTC)Reply
So, #3 is what we've been doing at WP:AINB since around August, and it has been working just fine, albeit without any PAG to justify it... we typically leave an edit summary like "LLM cleanup, as discussed at AINB and/or ANI". I personally have cleaned ~500 articles in this way, and only on one of those articles did someone else complain; I just reverted my deletion and asked that user to verify/fix the article, which they did. Also agreed with Toadspike that it would be a rare case where a tool would be helpful. In almost all cases this has to be done manually. NicheSports (talk) 19:45, 28 November 2025 (UTC)Reply
Oh, that's encouraging I suppose. It would still be nice to formalize it in a guideline (or at minimum a WikiProject advice page), for the combination of legitimacy and clarity that we get from explicitly writing stuff down. lp0 on fire () 23:05, 28 November 2025 (UTC)Reply
I feel like we can just use the general provisions of WP:CHALLENGE etc if it's the usual AI stuff and the sources don't verify. Alpha3031 (tc) 23:50, 28 November 2025 (UTC)Reply
Also, WP:5P3 exists. I don't really know why this is even a discussion to be honest. Text can be added, changed, or removed at any time, that's the fundamental point of a wiki. Gnomingstuff (talk) 01:15, 30 November 2025 (UTC)Reply
Good idea, any chance you want to give it a whirl? Maybe makes sense to start as an advice page at WP:AIC. Also pointing you to this, which is an idea I had with some support at AIC: WT:WikiProject AI Cleanup/Archive 4 § Guidance on handling article with mostly minor edits subsequent to LLM-rewrite. Maybe this could be incorporated? NicheSports (talk) 21:16, 29 November 2025 (UTC)Reply

I'm inclined to agree that the community is currently fairly vigorously contesting LLM slop. There are even false positives, with at least one case of something from 2010 getting tagged. Remember that LLMs are trained on Wikipedia. Nobody tagged me for this, but I recently saw text I had written where I used "fostered" and "surpassed", two tagged vocab words; on double-checking, both were used by the sources, so I was being faithful by also using them. Shlomo Lambroza [Wikidata] and Diana Dumitru probably didn't use an LLM; they used that vocabulary because, with precise diction, they decided that "surpassed" and "fostered" were the best way to express themselves at that moment. Not saying that the slop isn't a big problem, but right now I think there is adequate control of it, thanks to a lot of volunteer work, time, and energy. See, I did 3 things. But I remember someone telling me about the rule of 3 at least 5 years ago, and it had nothing to do with LLMs. Andre🚐 02:08, 29 November 2025 (UTC)Reply

To be clear, I'm not proposing that anyone can delete anything they personally think might have been written by an LLM, but in cases where a user has a long history of LLM misuse, it feels unlikely that they also just happen to write like an LLM. I don't necessarily agree with you that enough is being done to clean up after LLMs to avoid needing a measure like this, but even if that's true, such cleanup still wastes a huge amount of community time. The current wording of WP:ONUS means that if a source has been provided, it's the responsibility of the person removing information to check that verification fails. The thing about AI is that it's very easy to make something that looks convincing, meaning one often can't tell at a glance whether the sources are okay. This creates a WP:TNT situation where it's easier to blow it up and start over than to fix the problems by manually checking each source, which can take a very long time. lp0 on fire () 13:01, 29 November 2025 (UTC)Reply
That makes sense. But isn't it pretty easy to make something look convincing without AI? Shouldn't we use a system of cleaning up that isn't so confrontational? Couldn't erasing pages start edit wars? There have been very good alternative suggestions here. Orlando Davis (talk) 20:31, 29 November 2025 (UTC)Reply
It's not true that WP:ONUS means that if a source has been provided, it's the responsibility of the person removing information to check that verification fails. WP:BURDEN means the other editor has to provide one source (but only one; you can't make them WP:FETCH an endless supply of sources). WP:ONUS says only that it's the other guy who has to organize a consensus to include the information.
One of the footnotes in BURDEN gives a partial list of reasons why one might be justified in removing cited content: removing editors "must articulate specific problems that would justify its exclusion from Wikipedia (e.g., why the source is unreliable; the source does not support the claim; undue emphasis; unencyclopedic content; etc.)". In practice, I suspect that an edit summary along the lines of "Presumptive removal of text from an editor since blocked for abusing AI tools" would be considered an entirely sufficient articulation of a specific problem. WhatamIdoing (talk) 21:46, 29 November 2025 (UTC)Reply
That was my failure to read the footnote; thanks for clarifying. I still think it'd be helpful to formalize allowing such presumptive deletions. lp0 on fire () 22:09, 29 November 2025 (UTC)Reply
It might be useful to have a short page on when and why a Wikipedia:Presumptive removal would be warranted. If it gets used and doesn't create a lot of problems, it would probably be easy to get an "Oh BTW there's this WP:PRESRM thing..." added to a guideline or policy somewhere. WhatamIdoing (talk) 23:25, 29 November 2025 (UTC)Reply
To be clear, are you suggesting a single page that collates all the common kinds of presumptive removal (AI, socks, copyvios, banrevert, arbecp, maybe something else I haven't thought of)? lp0 on fire () 09:11, 30 November 2025 (UTC)Reply
Yes.
I'm thinking of something that's more of a 'process description' page than a 'rulebook'. It could be a little bit similar to Wikipedia:Why was the page I created deleted? or Wikipedia:What is significant coverage? After someone reads it, they should know what presumptive removal is (mass removal of edits from known-problematic individuals), why we use it (efficiently protecting Wikipedia), and what to do (careful evaluation). WhatamIdoing (talk) 23:45, 30 November 2025 (UTC)Reply
It may be relevant to this discussion that Orlando Davis has been temp-blocked following an ANI report concerning disruptive editing and LLM use. fifteen thousand two hundred twenty four (talk) 02:34, 1 December 2025 (UTC)Reply

Wikipedia app

edit

In the Wikipedia app, the English Wikipedia doesn't show whether an article is Good or Featured. For example, in the German Wikipedia—like this good article—this information appears at the bottom of the article in the app, and it even shows the date when the article was selected as Featured. I strongly suggest adding this feature—and the date of selection—to the English Wikipedia app as well. Vastmajority20025 (talk) 19:37, 28 November 2025 (UTC)Reply

Last I heard, readers don't notice or care about those little icons, so why should we bother? WhatamIdoing (talk) 21:47, 29 November 2025 (UTC)Reply

Wikipedia as a human-written encyclopedia

edit

I'm opening this as a more general idea lab discussion since I don't have a specific proposal, but we've reached the point now where we really need to be looking into how we frame Wikipedia's relationship with AI, especially in public-facing areas. There's currently nothing public-facing, not even on the main page, emphasizing that Wikipedia is a human-written encyclopedia (or whatever term you want to use). As LLM content only becomes more common, the fact that Wikipedia is written by humans is going to become one of its defining characteristics and a major reason why it's a better alternative to other sites. Has anyone given thought to how we might incorporate this? Thebiguglyalien (talk) 🛸 02:57, 29 November 2025 (UTC)Reply

I do think Wikipedia has always had a human and humanistic aspect, and I support the proposal in the abstract. Maybe we could have a contest for someone to design a banner or an interactive display to promote Wikipedia: The Free as in Libre, Human Encyclopedia. Like we used to do in the old days. Andre🚐 03:02, 29 November 2025 (UTC)Reply
Awful suggestion. 1. Being human-written is not an important pillar of Wikipedia; it is rather the bare minimum for any respectable encyclopedia, book, or news article. Hence it's a bad idea to emphasize this fact so prominently. 2. Wikipedia is not "human". That particular phrasing is confusing.
I don't object to including the fact that Wikipedia is human-written in some guidelines, essays or promotions. But it's not the central selling-point of Wikipedia – lots of other outlets are human-written too but inferior to Wikipedia in many ways (e.g. less reliable). Joe vom Titan (talk) 13:17, 29 November 2025 (UTC)Reply
I have some bad news for you about the internet of the 2020s. Thebiguglyalien (talk) 🛸 15:41, 29 November 2025 (UTC)Reply
What bad news? Has AI slop appeared on nytimes.com or home.cern yet? AI is neither the biggest problem in the world nor the biggest problem on the internet. For one, misinformation spread by oil companies, oligarchs, and petrostates to serve their own interests is much more insidious. Joe vom Titan (talk) 13:44, 30 November 2025 (UTC)Reply
Even more bad news: the list (misinformation spread by oil companies, oligarchs, and petrostates) includes states, x-archs... that have lots of cash they crave to grow. What better way to get richer than AI (restricted by very high subscription fees)? US$20/month is my limit. What's Bezos'? Oh, right, Amazon is one of the three largest investors in AI. Looked at or listened to the A. website lately? — Neonorange (talk to Phil) (he, they) 03:32, 1 December 2025 (UTC)Reply
'Language-independent articles'? How has the world become so dystopic? Each language has its own mode of communication, its own mode of thinking. There is no one-to-one relationship between a concept in one language and a concept in any other. Even if we could modify language to allow for such things, this would destroy the organic diversity that is the body of human language. God knows I don't want to read an article that is written in a manner inconsistent with the thought process that is associated with the language in which it is written. I can only imagine the horrible damage this will do to languages other than English. Haven't we done enough harm with the likes of the Scots Wikipedia? Yours, &c. RGloucester 06:05, 29 November 2025 (UTC)Reply
On the other hand, there are quite a few articles that exist in fr, de, etc and nobody has created in en. Google Translate does ok, but affects ease of discovering information and browseability. So if we had a way to conceptualize a layer between factoids and prose, it could be useful to aid in translation or spreading knowledge further and sooner. At any rate, this is only theoretical. If and when it is accomplished, it may or may not even achieve critical mass. Andre🚐 06:12, 29 November 2025 (UTC)Reply
Our goal is not to have more articles for the sake of more articles, but to have articles that meet our quality standards. Usually, there is a reason why an article may exist on a non-English Wikipedia, but not on the English Wikipedia. The English Wikipedia has much higher standards in terms of referencing. Very often, articles found on other Wikipedias lack sources at all, or rely heavily on niche sources that would be insufficient to establish notability here. Additionally, they are frequently written from a perspective that is insufficiently global for the English Wikipedia. I have many times endeavoured to translate an article from one Wikipedia to another, in the languages that I know, only to be stymied by the poor quality of the content. It is often easier to start a new English Wikipedia article from scratch, using some of the sources from the other Wikipedia as a foundation. Yours, &c. RGloucester 06:19, 29 November 2025 (UTC)Reply
Not necessarily always the case. There are many good quality articles on fr or de that if I could snap my fingers to port over with an idiom-proof translation would be worthwhile in edifying readers, and have appropriate references. Andre🚐 06:28, 29 November 2025 (UTC)Reply
Ask a translator for assistance, there are plenty of volunteers willing to help. No translation can be 'idiom-proof', unless the fundamentals of language itself are to be destroyed. Yours, &c. RGloucester 07:21, 29 November 2025 (UTC)Reply
(I wouldn't use the German-language Wikipedia as an example of appropriately cited articles, as their standards are very different from ours.) WhatamIdoing (talk) 21:49, 29 November 2025 (UTC)Reply
I am aware that a human translation can't be idiom-proof, but that is the promise of an abstract Wikipedia, a syntactically complete database-frontend of facts that takes Wikidata beyond simply data and makes actual articles. I mean another way to do that would just be to feed Wikidata to an LLM that doesn't have other knowledge or the ability to call out to random tools and make things up, but simply weaves Wikidata into article form. That wouldn't work though without a lot more UX work and volunteer time on data entry. At any rate, I don't necessarily think the articles I'm personally interested in are the ones that translators need to work on, so it kind of feels like an imposition to dump my requests into that list. I'm sure there's a backlog. Instead, I'm dumping them into Wikiprojects that will potentially have a contributor write an English article while just consulting the other articles. But I do know that there are many many topics that are adequately covered in international Wikipedias. It seems silly to ignore the possible technological developments that will make reading content in other languages more accessible. Here's an example: Mikhail Kulisher [he; ru; uk]. The articles seem fairly complete and are referenced. There is a whole pile of similar articles. Andre🚐 05:59, 1 December 2025 (UTC)Reply
Your claim that There is no one-to-one relationship between a concept in one language and a concept in any other sounds a bit overstated. Simple facts (Angela Merkel was Chancellor of Germany; calculus is a type of mathematics; carrots are edible) seem to translate quite well between most languages. There are individual instances of non-translation (家は青い – the house is, um, blue or green or thereabouts), but it's not true that there are no concepts that map to the same concept in any other language. WhatamIdoing (talk) 22:10, 29 November 2025 (UTC)Reply
I said that there is no 'one-to-one' relationship, not that there was no relationship. The process of translation is a delicate one. What you call a 'simple fact' could potentially be translated tens of different ways. The meaning of 'edible' can be rendered many ways in English, and it is likewise true in most other languages. I could say 'can be eaten', 'able to be consumed', 'safe to eat', 'comestible', depending on context, register, &c. By creating an artificial one-to-one relationship between words, whereby 'edible' can only be rendered as one specific term in another language, you destroy the organic diversity of that language, and the naturalness of the text produced. It is very likely that whatever term is chosen may end up being inappropriate in the relevant context, because the person creating this artificial one-to-one relationship will not have a full grasp of the relevant language, and will rely on horrible dictionaries or computer code. The end result will be Scots or Greenlandic Wikipedia, redux. Yours, &c. RGloucester 07:51, 30 November 2025 (UTC)Reply
And yet, somehow, I think that if it offered me a sentence like "carrots are edible[source]", and I didn't think it was appropriate in the relevant context, had the wrong register, etc., then I could probably either reject it or re-write it without destroying either the organic diversity of the English language or the naturalness of the text in the Wikipedia article. WhatamIdoing (talk) 23:49, 30 November 2025 (UTC)Reply
Sure, if you're a speaker of English and a speaker of the source language, you will be able to evaluate whether the machine's output is suitable or not, though I don't see how this will save any time as compared with traditional translation. However, I expect that this 'abstract Wikipedia' will mainly be used for minor languages, with few available editors qualified to make such judgements. It is a recipe for disaster. Yours, &c. RGloucester 11:05, 1 December 2025 (UTC)Reply
I'm a native Anglophone, and I wrote poetry in Hebrew that I had trouble translating. user:RGloucester is absolutely right that there are things that don't translate well. "Traduttore, traditore" -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:30, 1 December 2025 (UTC)Reply
@Chipmunkdavis, please see m:Abstract Wikipedia/Abstract Wikipedia naming contest. I gather that the team would very much like to have a different name (though I don't have any insight into why). WhatamIdoing (talk) 21:52, 29 November 2025 (UTC)Reply
I was pretty sure that I had proposed Wikigenerator, but I guess great minds think alike. Andre🚐 21:58, 29 November 2025 (UTC)Reply
  • It is fair to exclude LLM-written content from Wikipedia on the grounds that LLMs are currently not very competent at the task of writing an encyclopedia article, but I am opposed to any display of human or "humanistic" chauvinism, especially anywhere as prominent as the front page. It is also not practical to uphold this claim/promise, as it is basically impossible to be certain whether any text is "really human" or has had a partial/full LLM contribution behind it. TryKid[dubiousdiscuss] 14:48, 29 November 2025 (UTC)Reply
Seconded. The LLM text is more prevalent than some people realize, and certainly more than laypeople realize. Making such a claim after 2 years of having no AI policy or guidelines would be telling our readers a lie. Gnomingstuff (talk) 05:16, 30 November 2025 (UTC)Reply
I agree on all counts. LLM use is unsuitable for writing new articles, but it's also not outright banned by policy (at least not yet). Even if it were banned, there are still articles out there that have been written partially using LLMs.
We could theoretically ban any LLM use, but that still wouldn't make the statement "Wikipedia is entirely human-written" true. – Epicgenius (talk) 23:53, 30 November 2025 (UTC)Reply
Gnomingstuff & Epicgenius, I don't know if you're referring to this or if you haven't seen it yet, but as of last week there is in fact a content guideline in regard to creating articles with LLMs, and there are ongoing discussions to decide how its scope will be expanded beyond simple article creation: Wikipedia:Writing articles with large language models. Thebiguglyalien (talk) 🛸 06:04, 1 December 2025 (UTC)Reply
@Thebiguglyalien, thanks for the ping. I did see this, but it doesn't apply retroactively, nor does it cover LLM-assisted expansions of existing articles. We'd need to ban LLM for at least the latter before we can claim that WP is human-written (and even then, people will try to sneak in LLM text constantly, so vigilance will be required). Epicgenius (talk) 06:44, 1 December 2025 (UTC)Reply
To clarify, when I said "2 years" I meant the prior 2+ years' worth of accumulated AI edits. (The guideline was approved just days before the 3-year anniversary of ChatGPT.) Gnomingstuff (talk) 13:47, 3 December 2025 (UTC)Reply
The Wikimedia Foundation itself seems to be much less wary of generative AI, using it in some of their TikTok videos (one on Wicked (film), if I recall correctly) and advertising in their 25th anniversary video how Wikipedia trains AI. If there is a community consensus that Wikipedia and generative AI are not allies, should we address this with Foundation leaders so they can alter their messaging? ✨ΩmegaMantis✨blather 20:19, 1 December 2025 (UTC)Reply

Okay, if for a moment we were to ignore the ideas where we welcome and accept AI content as part of Wikipedia's identity, what could we hypothetically do as a project to make it clear what separates reading Wikipedia from things like asking ChatGPT, searching Grokipedia, or using the Google AI Overview? Thebiguglyalien (talk) 🛸 05:59, 1 December 2025 (UTC)Reply

Perhaps we can mention that it's "human-vetted" or "human-curated"? Even the AI-generated content is (usually) detected, and tagged or removed, rather quickly. However, Thryduulf also has a good point that many articles have at least some non-human input. – Epicgenius (talk) 15:50, 1 December 2025 (UTC)Reply
"Even the AI-generated content is (usually) detected, and tagged or removed" – all we can say is that the problematic AI-generated content is usually tagged and/or removed. Any AI-generated content that is stylistically similar to a Wikipedia article and which contains no errors (e.g. incorrect statements, non-existent references, etc.) will almost always not be flagged, because doing so wouldn't benefit the encyclopaedia. Accordingly it is impossible to know whether there have been 1 or 1 million edits of this nature. Thryduulf (talk) 18:23, 1 December 2025 (UTC)Reply

Suggestions for new features in Wikipedia

edit

With how popular explanatory footnotes are, adding an option to the visual editor's citation button for creating footnotes could be pretty useful. A section in the visual editor's link button for reusing previous links could also be useful, considering how many times I find myself linking to the same article. A more secondary visual feature: instead of citations next to each other being distinct, like [1][2], they could be merged, like [1,2]. Misterpotatoman (talk) 07:07, 29 November 2025 (UTC)Reply

I like the idea on footnotes.
For reusing previous links, you just need to type '<ref' where you want to put your source in the Visual Editor; a pop-up will automatically appear with three options: 'Automatic', 'Manual', and 'Re-use'.
Merged citations like [1,2] would be too close for comfort and could result in mis-taps on smaller handheld devices; see also WP:AINT. Cdr. Erwin Smith (talk) 12:46, 29 November 2025 (UTC)Reply
If you'd like to see merged footnote markers, then see w:fr:Moineau mélanure#Description. The proposal is similar to the style used at the French Wikipedia. WhatamIdoing (talk) 22:13, 29 November 2025 (UTC)Reply
That requires manually inserting fr:Template:, between each <ref>. jlwoodwa (talk) 04:15, 30 November 2025 (UTC)Reply
No, I mean reusing links as in the Wikipedia feature that lets you link to other articles; I'm not talking about citations. Also, I think if citations were merged, they should pull up a screen where all the citation links appear; I think that would actually make it easier on smaller devices. Misterpotatoman (talk) 22:32, 29 November 2025 (UTC)Reply
It sounds like you're thinking about the scenario in which I go from one article to the next to add a link to (for example) Rare disease (real example, BTW), and instead of clicking the link button and typing rare dise in the search box until it pops up the link to the correct article, it would have a list of the most recent ones I've added links to, and I could just click on one of those instead of typing.
As someone who never edits from a smartphone, this would not be efficient for me. But since you're on mobile, where IMO typing is practically impossible, let me ping @PPelberg (WMF) and ask him to please make a Phab task suggesting this new feature idea for the mw:mobile visual editor. WhatamIdoing (talk) 23:53, 30 November 2025 (UTC)Reply
Exactly, that's what I meant, and that's even better than my first idea for link reuse. Misterpotatoman (talk) 23:58, 30 November 2025 (UTC)Reply
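A minimal TypeScript sketch of the recently-used-links cache described above; the class and its behavior are illustrative assumptions, not part of the visual editor.

```typescript
// Keep the last few link targets so the editor can re-offer them,
// most recent first, without the user retyping the title.
class RecentLinks {
  private items: string[] = [];
  constructor(private readonly max: number = 10) {}

  add(target: string): void {
    // Move a repeated target to the front rather than duplicating it.
    this.items = [target, ...this.items.filter((t) => t !== target)].slice(
      0,
      this.max
    );
  }

  list(): string[] {
    return [...this.items];
  }
}

const recent = new RecentLinks();
recent.add("Rare disease");
recent.add("Smallpox");
console.log(recent.list()); // ["Smallpox", "Rare disease"]
```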

Revamp to section system

edit

Navigating a wiki page can be slow, so what if we had tabs for each section below the main and talk tabs, or maybe in the same place as the article page's tabs? This would not replace regular tabs but would add to them, and there should be a setting to disable them. Misterpotatoman (talk) 04:54, 30 November 2025 (UTC)Reply

Are you talking about WP:TOC?
If so, then it's disabled by default in Wiki (Web) Mobile (the version you're currently using). I did raise a ticket regarding it 2 months ago, and they recently said they're working on it.[1]
For now, you can use a desktop/PC if you want to access this feature. Cdr. Erwin Smith (talk) 08:05, 30 November 2025 (UTC)Reply

References

  1. ^ "Make 'Table of Contents' available in Wikipedia (Web) Mobile".
Yes, imagine that but on mobile. Misterpotatoman (talk) 08:36, 30 November 2025 (UTC)Reply
You can support the wish here. Cdr. Erwin Smith (talk) 12:19, 30 November 2025 (UTC)Reply

Policy on userpages for unregistered users

edit

Hi there, I think that the policy on userpages for temporary accounts should be developed and suggested across Wikipedia. ~2025-37397-24 (talk) 13:19, 30 November 2025 (UTC)Reply

Why is a policy needed? To my understanding, it is not technically possible for a TA to create their own userpage. CMD (talk) 16:43, 30 November 2025 (UTC)Reply
@Chipmunkdavis Well, to ensure that TAs would have a fair amount of information described on their userpages. It is definitely possible for temporary accounts to have a userpage, if a registered user creates one for them to edit and add information about themselves. ~2025-37397-24 (talk) 18:03, 30 November 2025 (UTC)Reply
It would not ensure anything, many registered users have little on their userpages. Further, asking registered users to work on something that will become irrelevant in less than 90 days seems a bit of an imposition. Any TA who wants to share details about themselves can do so on their talkpage. CMD (talk) 18:14, 30 November 2025 (UTC)Reply
Any TA who wants to share details about themselves can do so on their talkpage. or create an account, which imho should be the encouraged course of action. Thryduulf (talk) 18:39, 30 November 2025 (UTC)Reply
Indeed. CMD (talk) 16:42, 1 December 2025 (UTC)Reply
Change that to "should be the required course of action" and I'd agree with you. --User:Khajidha (talk) (contributions) 17:01, 1 December 2025 (UTC)Reply

Thoughts on standardized policy around GPTZero or other AIDetect software?

edit

It would be nice to have some essay or guideline or such for WP:GPTZERO/WP:AIDETECT software. As is, there is a good understanding that such software is highly error-prone and subject to an unacceptably high false positive rate, and yet it is also regularly used as additional evidence, often alongside other signs. I myself have used it as evidence sometimes, though interpretation of such output remains highly subjective.

There seems to be an exponential rise in AI conduct reports at ANI [12], so having more guidance seems useful.
I'd say we still lack a useful metric for definitively determining AI usage, but this seems like an easier question to solve, and I think one the community may already have a good idea on. In what circumstances are AI detectors useful, and when should they not be allowed? User:Bluethricecreamman (Talk·Contribs) 16:55, 30 November 2025 (UTC)Reply

I've been thinking about it a bit since I wrote this comment in a report I filed a while ago, but I think determining AI use is like diagnosing a rare disease: the probability of AI given any one sign is low, but the conjunction of multiple signs (previous use of AI, hallucinated URLs) plus human judgement is important to determine AI usage. Even GPTZero is useful here, though its high false positive rate should be understood. User:Bluethricecreamman (Talk·Contribs) 16:59, 30 November 2025 (UTC)Reply
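To illustrate the rare-disease analogy, here is a minimal TypeScript sketch of naive-Bayes evidence combination; every number in it is invented purely to show how several individually weak signs can compound, and none comes from any real detector.

```typescript
// Combine independent signs of AI use with Bayes' rule in odds form.
// prior: P(AI) before reading the text (a low base rate).
// likelihoodRatios: P(sign | AI) / P(sign | human) for each sign seen.
function posteriorProbability(
  prior: number,
  likelihoodRatios: number[]
): number {
  const priorOdds = prior / (1 - prior);
  const posteriorOdds = likelihoodRatios.reduce(
    (odds, lr) => odds * lr,
    priorOdds
  );
  return posteriorOdds / (1 + posteriorOdds);
}

// Invented numbers: a 5% base rate and three signs, each only a few
// times likelier under AI authorship than under human authorship.
const p = posteriorProbability(0.05, [4, 6, 8]);
console.log(p.toFixed(2)); // ≈ 0.91: individually weak signs compound fast
```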
GPTZero is actually a very good indicator when the percentage is high. You can't really "disallow" it. I think people who dismiss it out of hand are missing the boat. Of course it can produce false positives, but I've never seen it fail to detect AI that was in fact AI: a high percentage is a significant data point. Also, they keep improving the algorithm, so it only keeps getting better. -- GreenC 17:24, 30 November 2025 (UTC)Reply
Automatic AI detectors can have false negatives, but generally you have to do some fine-tuning of the output to get it there. (very depressing study btw) Gnomingstuff (talk) 20:58, 30 November 2025 (UTC)Reply
Controversial opinion, but I think if there's considerable human editing in an AI output, it should be allowed. We should also AGF in this whole process.
As such, if the final result of these anti-AI tools is Unclear/Mixed/Medium/<50% probability of being written by LLMs, we should favour the editor in our verdict. Of course, the final verdict should be in the hands of an actual experienced human. Cdr. Erwin Smith (talk) 08:09, 1 December 2025 (UTC)Reply
My opinion has changed. In general, if AI is used responsibly so as to not violate Wikipedia policies, it's not technically prohibited. It just so happens, though, that many editors are using it blatantly, without disclosure, and in ways that do violate our policies.
We really shouldn't be trying to identify every bit of AI, an obviously impossible goal. We should consider AI usage an aggravating factor (WP:CIR) when considering other policy violations. User:Bluethricecreamman (Talk·Contribs) 14:18, 1 December 2025 (UTC)Reply
I disagree -- a lot of what AI detectors are checking are things unrelated to Wikipedia policy. If an editor generates text with AI, does not verify the claims, but does change the sentence structure enough to get it to 50%, then the core problem has not been addressed. Gnomingstuff (talk) 14:36, 1 December 2025 (UTC)Reply
This is exactly why I said that the final verdict should rest in the hands of an experienced human.
Also, LLMs are getting better rapidly with every passing generation, and cannot lie the way humans can. If you instruct them to write a wiki article, they will do so abiding by most, if not all, of the existing policies.
Of course, they can still make some errors, and that's why we humans are here to weed such articles out!
So basically:
  • >50% AI probability → insta-discard
  • ≤50% AI probability → we decide with leniency
Cdr. Erwin Smith (talk) 19:06, 1 December 2025 (UTC)Reply
Given the error rate of most of the checkers, they should pretty much never be used on their own, and when they are used, their results should be taken with a huge grain of salt while keeping WP:AGF in mind. With that said, I wouldn't be opposed to them being banned as a main resource on the topic. PackMecEng (talk) 17:48, 30 November 2025 (UTC)Reply
Given the error rate of most humans who claim to have spotted AI use, we shouldn't rely on human detection, either. The accuracy rate is barely better than a coin toss for editors like me (who don't use AI tools regularly). This study says that power users of generative AI tools may "only" be mistaken 10% of the time. I wonder how that compares to the common AI-detecting tools? WhatamIdoing (talk) 00:58, 1 December 2025 (UTC)Reply
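To make the base-rate concern concrete, here is a small worked example with invented numbers rather than figures from the study above: even a detector (human or automated) that is right 90% of the time produces mostly false alarms when only a small fraction of reviewed edits are actually AI.

```typescript
// With a low base rate, even a 90%-accurate detector (human or tool)
// yields mostly false alarms. All numbers here are invented.
const prevalence = 0.05;  // fraction of reviewed edits that are actually AI
const sensitivity = 0.9;  // P(flagged | AI)
const specificity = 0.9;  // P(not flagged | human)

const truePositives = prevalence * sensitivity;               // 0.045
const falsePositives = (1 - prevalence) * (1 - specificity);  // 0.095
const precision = truePositives / (truePositives + falsePositives);

console.log(precision.toFixed(2)); // 0.32: two of every three flags are wrong
```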
Oh for sure, people are not great at it either. All options kind of suck at the moment. PackMecEng (talk) 01:44, 1 December 2025 (UTC)Reply
Sometimes a duck is a duck, but I agree with the general sentiment that we are stuck between a rock and a hard place here Katzrockso (talk) 08:14, 1 December 2025 (UTC)Reply
That 10% error rate is mostly false negatives (that is, sometimes AI can slip past even expert eyes) and included LLMs that were specifically tuned to defeat detection. The consensus opinion of the experienced humans in that study correctly identified 99.3% of AI-written articles as AI, and never once falsely identified human-written text as AI. Quoting from the article: "Despite our best efforts to generate articles that our experts would find undetectable, most of their detection rates remain largely unchanged from prior experiments, and the expert majority vote is again perfect on all 60 articles." -- LWG talk 16:46, 1 December 2025 (UTC)Reply
They should certainly never be used on their own, as they have a high rate of both false positives and false negatives. In conjunction with other signs they can be interesting, but never particularly useful: whenever there are enough other signs that you can trust the output, those other signs are enough to be determinative on their own. I'd support a page explaining this. Thryduulf (talk) 18:43, 30 November 2025 (UTC)Reply
One sign that I think may be helpful is "paste check" (detecting when a new editor copy/pastes a large block of text into an article). However, it triggered only a couple of times recently, and it's not specific to AI. It could be copyvios, someone pasting in a valid quotation, or other things. WhatamIdoing (talk) 01:01, 1 December 2025 (UTC)Reply
Some folks seem to also prefer using Google Docs for creating articles, the mad lads. User:Bluethricecreamman (Talk·Contribs) 01:26, 1 December 2025 (UTC)Reply
From what I understand, paid AI detectors are much better than the free ones in terms of accuracy, at least for the kind of generic, non-fine-tuned output that we are probably getting here, and are very good at this point. Pangram seems to be the best-performing.
That said I don't personally use AI detectors, if only because of optics -- I don't have the patience to deal with endless "well automatic AI detectors get it wrong so please take your ugly tag off MY beautiful writing." Gnomingstuff (talk) 20:57, 30 November 2025 (UTC)Reply
Note: I've gone ahead and boldly made a section on WP:AISIGNS for WP:GPTZERO. Feel free to add collective wisdom. User:Bluethricecreamman (Talk·Contribs) 05:20, 1 December 2025 (UTC)Reply
Two thoughts:
  1. Even though no tool is perfect, in my experience tools like GPTZero are very accurate at detecting raw LLM output that was copy-pasted into Wikipedia. In the study WhatamIdoing linked GPTZero had a 0% false positive rate on every test set except the one where the LLM was specifically fine-tuned to defeat AI detection. I have never yet seen GPTZero return 100% AI on human-written text or 100% human on copy-pasted LLM output. So for our use case, where our primary concern is novice users who copy-paste large quantities of slop, we can expect the tool to be helpful, and it would be counterproductive to tell people to ignore it.
  2. What are the consequences of the tool being wrong? If the tool gives a false-negative, the result is that we fail to detect AI content, which is the same outcome as if we don't use the tool at all. If the tool gives a false positive, the result is that we incorrectly believe content to be AI-generated, possibly leading to the content being reverted or the editor being asked to explain and justify their contribution. But if the content is not in fact AI generated, then all the editor needs to do is accept their WP:ONUS and acquire consensus for inclusion of their content, which is the same as the normal wiki process.
So basically, I don't understand what harm we are trying to prevent by discouraging editors from using AI detection tools to assess content. -- LWG talk 16:28, 1 December 2025 (UTC)Reply
The harm is false positives leading to harassment and sanctions. Also, the study notes that humans were far more successful than AI detectors in this. So let's not take a not-terrible tool and say it's accurate: the study does not fully support that, and I do not understand the help it gives on its own, given its well-known deficiencies and the harm it can easily cause to our most vulnerable user base. PackMecEng (talk) 18:29, 1 December 2025 (UTC)Reply
How are we defining "harassment and sanctions" here? Gnomingstuff (talk) 00:44, 2 December 2025 (UTC)Reply
The usual way. PackMecEng (talk) 13:56, 2 December 2025 (UTC)Reply
So if I understand correctly, you are concerned that a new editor who does not use AI might still write edits that GPTZero identifies as AI, and that would cause that editor to be inappropriately blocked, or to become a target of Wikipedia:Harassment? That seems unlikely, since editors aren't normally blocked based on one bad edit with no chance to explain themselves. If someone just hates new editors and decides to falsely accuse one of them of using AI without giving them a chance to explain themselves, the accuser will get WP:BOOMERANGed. -- LWG talk 14:51, 2 December 2025 (UTC)Reply
It is naturally stressful to deal with claims that your work may be AI. It sounds like an insult, it makes one wonder why an editor is picking a fight, and trying to involve additional folks through any noticeboard may escalate the situation.
It's not even about accusing; it's naturally stressful to be randomly flagged. We should of course ask and investigate as appropriate, but we should not be casting a giant fishnet unless there are broader policy violations. User:Bluethricecreamman (Talk·Contribs) 15:11, 2 December 2025 (UTC)Reply
If by fishnet you mean running some sort of broad scan of all new contributions and reporting every hit to ANI, then I agree with you. But I think choosing to engage in this community means exposing your writing to scrutiny, and if the stress of having to explain your contributions is too much for you, then a collaborative encyclopedia that seeks to have high sourcing standards is probably not the place for you. If a new user contributes large quantities of text without explaining themselves, I think it's reasonable to run their edits through GPTZero, especially if subjective tells of AI writing are present. If GPTZero returns a high AI percentage, I think it's entirely reasonable to reach out to the editor asking for an explanation, and to remove the content if no satisfactory explanation is given. We aren't under obligation to give content the benefit of the doubt here, the WP:ONUS is on the contributor. False content is much more harmful to the Wiki than missing content, since missing content can always be added eventually, but false content calls the entire rest of the wiki into question. It's also much easier to identify and add missing content than to identify and remove false content. -- LWG talk 15:32, 2 December 2025 (UTC)Reply
Actually, I think you made a very good point without realizing it. AI-generated content is not in and of itself a reason to revert something; that falls to whether it is poorly or falsely sourced. If you are reverting because it was a large block of content and you ran it through a dubious AI detector that came back positive, you need more than that to revert it; otherwise you are the problem there. That seems to be the general rub: blanket "this is bad" and going after people as you just described is the problem we are talking about. Heck, there was even a recent ANI thread where someone was trying to misapply G5 and G15 and finally, when that didn't fit, tried to IAR just because it was AI.[13] PackMecEng (talk) 17:19, 2 December 2025 (UTC)Reply
Yes, some people take the position that all AI-generated content is inherently bad and has no place here, whether for ethical or copyright or content quality or general project vision reasons. That's not my position, I'm with you that the problem with LLM content is that it frequently fails other policies, however it's also my position that we don't need LLM content, and we're currently facing a flood of bad LLM content that is overwhelming our normal mechanisms for dealing with bad content, so if this community can't find any way to navigate between the two slippery slopes here then I'd rather slide down the one that leads to no AI. -- LWG talk 17:46, 2 December 2025 (UTC)Reply
This is a good way of putting it. LLMs completely invert the effort difference between writing and reviewing. It's the same issue that led to the WP:MASSCREATION policy, but possible with any text anywhere. CMD (talk) 02:10, 3 December 2025 (UTC)Reply
It's also incredibly bad optics, especially when the WMF is currently in the middle of an advertising drive about how Wikipedia is the great human alternative in the age of AI -- an advertising drive no doubt directed at prospective donations from people who support Wikipedia for exactly that reason -- when in reality a substantial number of articles have quietly been AI-generated in part or full for several years. I'm actually kind of shocked that the media hasn't picked up on the fact that Wikipedia has only just now gotten around to creating real AI guidelines, given the response to the Simple Summaries debacle earlier this year.
So yes, we absolutely should be doing a "giant fishnet" to determine the extent of the problem. If we had started doing that in November 2022 like we should have, then it wouldn't be a "giant" undertaking, but we didn't, and so now it is. Gnomingstuff (talk) 03:04, 3 December 2025 (UTC)Reply
Not really, though, because if it has not been a problem for years and most don't know it was AI-generated, why remove it? That seems counter to being here to build an encyclopedia. PackMecEng (talk) 13:58, 3 December 2025 (UTC)Reply
Just because no one noticed a problem until recently doesn't magically make it not-a-problem. To take an extreme example, blatant vandalism has sometimes gone undetected for 10+ years. Gnomingstuff (talk) 17:10, 3 December 2025 (UTC)Reply
The point is that if you read a given bit of text and there are no problems with it, either stylistically or factually, then it does not become a problem when you find out it was (or might have been) written by (or with the assistance of) an AI. Thryduulf (talk) 17:21, 3 December 2025 (UTC)Reply
And my point is that just because no one changed a piece of text doesn't mean there are no problems with it. Gnomingstuff (talk) 23:51, 3 December 2025 (UTC)Reply
That's a strawman argument. I specifically said if there are no problems with a given bit of text, not that there were problems which hadn't been noticed. Text being AI-generated is not a problem in and of itself. It might contain problems, for example stylistic errors, factual errors, non-existent references, etc., but it is possible for every single one of those problems to also be present in human-written text. The different types of problem occur at different frequencies in differently-originating text (AI-generated text is very significantly more likely to include meta-commentary; human-generated text is very significantly more likely to include spelling errors), but the only type I can think of that only ever appears in one and not the other is copy-paste errors (e.g. copying one too few characters), and that's a mistake only humans make (although LLMs can obviously propagate such errors, I'm not aware that they can originate them). In some circumstances LLMs are more likely (but not guaranteed) to produce text with issues than an equivalent text produced by humans (a 1000-word submission is more likely to contain issues than a 10-word submission, regardless of origin), but such problems are identifiable specific things, not the mere fact of being written by AI. That is to say, the problem with a text containing a non-existent reference is that the reference does not exist, not that it might have been written using AI.
Text that objectively contains no issues when assumed to be human-written still contains no issues when alleged (or even proven) to be LLM-generated. Thryduulf (talk) 04:10, 4 December 2025 (UTC)Reply
There are several problems with this line of argumentation, which you've repeatedly expressed in every AI discussion I've seen you in. I recognize that your position is coherent, but the lack of recognition from you that other positions are also coherent is getting tiresome.
1. It's clear from these discussions that for many people (both editors and readers) text being AI-generated is a problem in and of itself, whether on ethical grounds due to the provenance of the technology or the economic dynamics of its implementation, or for legal concerns about the still-developing copyright landscape in the field, or for philosophical reasons about the overall vision of our project, or for whatever other reason.
2. Even setting that aside, AI text is still qualitatively different from human-written text in that the authorship is different, and authorship can change the acceptability of text totally independently of content; see WP:NOSHARE and WP:COIEDIT. So it's not automatically a given that all edits can be judged purely by the bytes they contain.
3. Even setting that aside, in the real world we never actually get your hypothetical "text with no problems in it", because our ability to assess text is not perfect. All we get is text with no known problems, which is acceptable if the text has had adequate scrutiny. Unfortunately, our resources for scrutinizing text are dramatically inadequate to the scale of the task, so we constantly prioritize our attention with various heuristics. Because the types of errors that tend to come up in AI text are different, the type of scrutiny they require also tends to be different, so knowing whether text is AI-generated may change whether we feel it has received the scrutiny it needs.
All three of those are very valid reasons why text that we would accept when written by a human might be rejected or subjected to additional scrutiny if we later discover it was written by an AI, and even from your position as I understand it, point 3 should motivate different treatment of AI content. -- LWG talk 05:12, 4 December 2025 (UTC)Reply
  • I think many editors are quietly using AI as a co-worker. They are careful about following policy and verifiability. They use GPTZero and other tools to check their work and copyedit. There is considerable human involvement. It's not "AI-generated"; it's something else, a mixture. We don't have a good name for this, and most discussions revolve around the worst-case scenario of a chatbot cut-paste-save. We would be fooling ourselves to ban AI entirely, and when used appropriately, what difference does it make? It's part of a complex process of humans and machines working together. Statistical fuzzy-matching algorithms are the basis of spell checkers and search. They are often incorrect and cause problems. We still use them because humans are in the loop error-checking. -- GreenC 17:53, 3 December 2025 (UTC)Reply
    If they edit the model output to unrecognizability, or don't use it directly, then it won't be detected at all, and accusing them of AI use at that point would be frivolous and tantamount to casting aspersions without more evidence. ~2025-31733-18 (talk) 18:08, 3 December 2025 (UTC)Reply
    At least some LLM-detectors (both human and non-human) flag non-LLM text as being AI (a false positive). I've seen (both on Wikipedia and elsewhere) humans who suspect someone of using an AI repeatedly hound that person if they do not admit to using AI - regardless of whether they have actually used AI or not. This is exactly as unacceptable as hounding an editor for any other reason. Thryduulf (talk) 19:27, 3 December 2025 (UTC)Reply

Alternative to blocking - flag edits for pending changes review


We have the pending changes review mechanism for semi-protected pages. Perhaps, instead of blocking an editor, their edits could be automatically flagged as pending changes. It would stop them as effectively as a block, it would create an opportunity to educate them, and it would allow them to continue the dialog.

I see this option being selected mainly for new editors. Constant314 (talk) 18:00, 1 December 2025 (UTC)Reply

With the current pending changes implementation, an article with pending changes protection continues to have a single, linear history of changes. Implementing a way to flag an individual edit as requiring review, while still allowing others to make changes that are visible to non-logged in readers, would require implementing a branching history, and would require someone to merge the pending change if approved. It would be more complex for the editor in question to make successive unreviewed pending changes to the article, as they would have to understand the branching model. It would be a significant amount of development effort, changing fundamental aspects of how articles are stored and edited. isaacl (talk) 19:48, 1 December 2025 (UTC)Reply
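isaacl's point can be made concrete with a toy model. The sketch below uses invented names and does not reflect MediaWiki's actual schema; it only illustrates the difference between today's single linear chain of revisions and the branch-and-merge model that per-editor pending flags would require.

```python
# Toy model: linear history vs. the branching that per-edit review would need.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Revision:
    text: str
    parent: Revision | None = None  # today: every revision has one parent chain


@dataclass
class Article:
    head: Revision                                         # what readers see
    pending: list[Revision] = field(default_factory=list)  # unreviewed branches

    def edit(self, text: str, flagged: bool) -> None:
        rev = Revision(text, parent=self.head)
        if flagged:
            self.pending.append(rev)  # branches off; readers still see `head`
        else:
            self.head = rev  # head moves on, so pending branches go stale

    def approve(self, rev: Revision) -> None:
        # If `head` has advanced past rev's parent, a reviewer must reconcile
        # the two lines of development -- the merge step that doesn't exist
        # in the current pending changes implementation.
        self.pending.remove(rev)
        self.head = Revision(rev.text, parent=self.head)
```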
Good-faith editors who ignore repeated talk-page warnings are often partially blocked from article space, which forces them to engage in discussion with no restrictions on where the discussion occurs. How would this proposal be more effective than a partial block?
If an editor fails to change in response to repeated talk-page warnings for edits that had to be reverted, how would this restriction convince them to stop editing disruptively, given that they can still edit almost as freely as before?
Pending changes is mainly intended to filter blatantly inappropriate drive-by edits such as vandalism and spam. If a user is "pending-changes restricted" for a subtle or complex issue, would reviewers be expected to check for that issue? Helpful Raccoon (talk) 04:45, 2 December 2025 (UTC)Reply

Better Citation Tool?


I'm new here, so if this idea has already been discussed or should be posted elsewhere, please tell me. I think that Wikipedia's cite tool, the one that comes up when you press the Cite button in the visual editor, should be improved to allow uploading a .RIS file and automatically populating all of the necessary fields from it. I got the idea from using tools like Scrible, which have this functionality. I was thinking you could implement this as an upload button in the Automatic tab of the Cite dialog, but I would be happy if it went anywhere in the tool. Unfortunately, I have no idea how to code this, so I would need a lot of help getting it to WP:VPR. Hopefully we can make this work. Mxwllhe (talk) 17:23, 2 December 2025 (UTC)Reply

More information on RIS. Mxwllhe (talk) 17:34, 2 December 2025 (UTC)Reply
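The core of the idea is mechanical enough to sketch. The Python below is a rough proof of concept, not the real tool: the tag map is deliberately partial, the output is a simplified {{cite journal}} string, and an actual implementation would plug into the existing citation dialog rather than print wikitext. RIS records are lines of the form "TAG  - value", ending with "ER  -".

```python
# Proof-of-concept RIS -> {{cite journal}} mapping (illustrative only).
RIS_TO_CITE = {  # partial map; the RIS standard defines many more tags
    "TI": "title", "T2": "journal", "JO": "journal", "PY": "year",
    "VL": "volume", "IS": "issue", "SP": "page", "DO": "doi",
}


def ris_to_cite_journal(ris: str) -> str:
    fields, authors = {}, []
    for line in ris.splitlines():
        if len(line) < 6 or line[4] != "-":  # RIS lines look like "XX  - value"
            continue
        tag, value = line[:2], line[6:].strip()
        if tag == "AU":
            authors.append(value)  # RIS gives authors as "Last, First"
        elif tag in RIS_TO_CITE:
            fields.setdefault(RIS_TO_CITE[tag], value)  # keep first occurrence
    parts = [f"author{i}={a}" for i, a in enumerate(authors, start=1)]
    parts += [f"{k}={v}" for k, v in fields.items()]
    return "{{cite journal |" + " |".join(parts) + "}}"


example = """TY  - JOUR
AU  - Doe, Jane
TI  - An example article
JO  - Journal of Examples
PY  - 2024
VL  - 12
SP  - 34
ER  - """
print(ris_to_cite_journal(example))
# {{cite journal |author1=Doe, Jane |title=An example article
#  |journal=Journal of Examples |year=2024 |volume=12 |page=34}}
```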

A new space for all these proposals for proposals relating to problematic LLM usage ?


It's taking up like 80% of this page. I have no formal proposal, but it might be a good idea to have a separate talk page/notice board for this. -1ctinus📝🗨 20:47, 2 December 2025 (UTC)Reply

Wikipedia talk:Writing articles with large language models is the place; additional participation is welcomed, especially as a lot of the discussion here is redundant to discussion already happening over there. -- LWG talk 20:57, 2 December 2025 (UTC)Reply
Maybe a banner would be helpful to redirect prospective posters? -1ctinus📝🗨 22:27, 2 December 2025 (UTC)Reply
That's a good idea -- it's a little concerning how many people weren't aware this RfC even happened (not saying it's their fault; the topic just seems strangely under-publicized despite taking up volumes of space). Gnomingstuff (talk) 03:24, 3 December 2025 (UTC)Reply

Make it mandatory to fill in basic information in the government_type= field of the infobox when writing about countries


When I look at a country's infobox and see that the listed type of government is vague or missing critical details (for example, just "Republic" or "Fascist dictatorship"), I really dislike that: it withholds information from our readers about what type of government it is, it looks empty and incomplete, and it is unwikipedian of us.

And so, my proposal is that when you're writing about a country (whether a present-day state or a historical one), you fill in three basic criteria:

  1. Is it Unitary? Federal? Confederal?
  2. Is it Presidential? Parliamentary? or something else entirely?
  3. Is it either a Republic or a monarchy?

The resulting phrase should be something like Federal presidential republic, Unitary absolute monarchy, or Confederal directorial principality.

Leaving the field vague can also cause something that I would call a useless conversation, like this:

  "Hello! what type of government is the Gambia?"

  "Presidential Republic."

  "But is it Unitary or Federal?"

  "Presidential Republic."

  "Could you give me more details on the type of government?"

  "Presidential Republic."

  "Is this working?"

  "Presidential Republic."

It doesn't make sense to leave out correct and useful information, and doing so just looks bland.

If you have any questions, just ask and I'll try to answer.

GuesanLoyalist (talk) 11:12, 3 December 2025 (UTC)Reply

We can’t make it mandatory to do anything (see WP:PRINCIPLE), but we could make this part of the guidance at WP:COUNTRYLEAD or Template:Infobox country if others think it’s a good idea (notified WP:COUNTRIES) Kowal2701 (talk) 12:01, 3 December 2025 (UTC)Reply
A guideline seems fine to me; I can see that as a good compromise and an improvement.
GuesanLoyalist (talk) 20:35, 3 December 2025 (UTC)Reply
A very bad idea, based on a complete misunderstanding of how articles are created and how they evolve over time. We don't police articles to ensure they comply with arbitrary criteria invented to correct 'blandness' or some strange urge to emulate WikiData. More so when things like 'government type' are frequently contested and per policy shouldn't be reduced to bald assertions in infoboxes anyway. AndyTheGrump (talk) 12:13, 3 December 2025 (UTC)Reply
Infoboxes should only include the most important information. They are generally too bloated, not too short. (For example, the infobox at United States still helpfully converts GDP from US dollars to US dollars and lists both values.) If it is not important to scholars of the Gambia whether it has a unitary or federal system of government, then that shouldn't be in the infobox. I also agree with Kowal that we can't make anything "mandatory". Toadspike [Talk] 12:53, 3 December 2025 (UTC)Reply
GuesanLoyalist: please keep in mind that {{Infobox country}} is used in over 7,000 articles, including articles that are not about countries. The |government_type= parameter is currently used in only about 3,800 of those. Using some technical means within the infobox to require a value in |government_type= would not make sense for at least some of those articles (e.g. Benelux, Council of Europe, Central America, Central Powers). – Jonesey95 (talk) 01:29, 4 December 2025 (UTC)Reply
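Anyone wanting to gauge the scale of the gap Jonesey95 describes could audit a dump or the live pages. Below is a rough sketch using the real mwparserfromhell library; fetching each page's wikitext (e.g. via pywikibot or the API) is omitted, and the function name is invented for illustration.

```python
# Rough audit sketch: does a page's {{Infobox country}} lack |government_type=?
import mwparserfromhell


def missing_government_type(wikitext: str) -> bool:
    """True if the page transcludes Infobox country without a usable
    |government_type= value; False if the value is set or the infobox
    is absent entirely (e.g. pages that are not about countries)."""
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates():
        if str(tpl.name).strip().lower() == "infobox country":
            if not tpl.has("government_type"):
                return True
            return not str(tpl.get("government_type").value).strip()
    return False
```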
Sorry, I didn't know about that.
Maybe make it basic guidance, as @Kowal2701 suggested, for the actual countries?
GuesanLoyalist (talk) 01:38, 4 December 2025 (UTC)Reply

Recruiting expert editors


One of the main issues with the most important pages is that they require editors who are experts on the topic to improve them to GA status. These people are busy IRL and are unlikely to take Wikipedia seriously. Peer-reviewed journals get these people to review for free, and this can count as service for tenure packets. One issue with using Wikipedia for this is that accounts are generally anonymous, and anyone can claim to be anything or anyone here. Recently we introduced temp accounts; could a non-anonymous account that requires a .edu email to sign up, combined with some collection of access to sources and letters of thanks that tracks service that could be put in a tenure packet, be possible/useful? Is there anything else that could be used as bait for expert editors? GeogSage (⚔Chat?⚔) 18:49, 3 December 2025 (UTC)Reply

Possessing a .edu email address (or equivalent) is not restricted to subject experts, or even just to academics. For example, by virtue of being a life member of the computer society at Swansea University, which I got by serving as the society secretary for a year about 25 years ago, I have an @swan.ac.uk email address despite not even being a graduate. I have a friend with a .ac.uk email address because they work as an administrator at a sixth-form college.
Secondly, not everybody who is a subject matter expert is an academic and/or works in academia. I have acquaintances who are experts in different aspects of railway history, but they are retired railway professionals, not academics. I spoke with one of them a few years ago about editing Wikipedia, but they were simply not interested - their primary interest was in conducting original research. There is also the issue that much of what they would want to write about, if they were interested in doing so, would be regarded as too niche for a general-purpose encyclopaedia. Thryduulf (talk) 19:48, 3 December 2025 (UTC)Reply
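The weakness Thryduulf describes is visible even in a sketch of the gate itself. The allowlist and both addresses below are invented for illustration; the point is that a domain check proves institutional affiliation at best, never subject expertise, so the administrator and the researcher pass identically.

```python
# Sketch of an email-domain gate (invented allowlist, hypothetical addresses).
ACADEMIC_SUFFIXES = (".edu", ".ac.uk")  # illustrative, far from complete


def looks_academic(email: str) -> bool:
    """True if the address's domain ends with a known academic suffix."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain.endswith(ACADEMIC_SUFFIXES)


print(looks_academic("researcher@dept.example.edu"))         # True
print(looks_academic("office.admin@college.example.ac.uk"))  # True: same gate,
# yet no expertise is demonstrated in either case
```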
I'm aware that you don't need to be an academic to have a .edu email; it is one possible filter, though, especially if the email is made public and the account is not anonymous. Trying to recruit experts outside academia is another challenge; I'm trying to focus on one approach to reaching one group of people who have a potential institutional motivation to do service. If you have ideas on ways to recruit and motivate other groups of experts like those you mention, please suggest them. GeogSage (⚔Chat?⚔) 19:59, 3 December 2025 (UTC)Reply
The WMF has programs like Wikimedian in Residence that encourage universities to support Wikipedia by creating Wikipedia-oriented positions for academics. But that involves a lot of resources to get a single position at a university. I wonder if we could encourage more editors by asking the WMF to also try encouraging universities to promote Wikipedia as an option for fulfilling faculty service requirements.
On the front of experts outside of academia, expanding Wikipedia Library offerings and publicizing them more might attract some contributors. signed, Rosguill talk 01:41, 4 December 2025 (UTC)Reply
If we could get universities to accept Wikipedia work as service, through whatever means, I suspect we would have a large volume of academics editing. I use Wikipedia as a means to help me actually read the stack of PDFs I download for work on other projects and to broaden my understanding of my discipline; the instantaneous gratification of including a source or bit of information is a great motivator. But most professors I know consider it a waste of time they could spend on things they get credit for. Even if a university doesn't consider it as part of a tenure packet, "verified" profiles could help overcome this by allowing a professional to demonstrate some outside work in a qualitative way (even outside academia). GeogSage (⚔Chat?⚔) 02:19, 4 December 2025 (UTC)Reply