Wikipedia talk:Wikipedia Signpost

Are the tags still supported?

Side note: What I really wanted (XY problem) was "Archives by column" – every "In the media", every "In focus", every "Humour", etc.

The archives link to /Series, where I found the tag Wikipedia:Wikipedia Signpost/Tag/humour. For some reason, the latest articles there are Wikipedia:Wikipedia Signpost/2023-12-24/Humour and Wikipedia:Wikipedia Signpost/2024-01-31/Comix.

Why are later humour-related articles not tagged? E.g. Wikipedia:Wikipedia Signpost/2024-02-13/Comix and Wikipedia:Wikipedia Signpost/2024-03-29/Humour. —⁠andrybak (talk) 21:40, 25 December 2025 (UTC)

Wrong version of the map

The Venezuela map that was used in this edition's Traffic Report was the SVG version instead of the PNG version. The SVG version loads fonts incorrectly and has not been kept up to date: the map has since been updated, but the SVG has not. Would someone mind fixing this on the Signpost? Thanks, Chorchapu (talk | edits) 17:40, 15 January 2026 (UTC)

LLM and the Signpost

A disclaimer reading "This article was written by an editor who incorporated copyedits from Claude Opus 4.5. Claude is a closed-source large language model sold by Anthropic PBC; those who find this offensive, disturbing or unpleasant may wish to avoid reading it." was added to Wikipedia:Wikipedia Signpost/2026-01-15/Special report after numerous complaints in the Discussion section.

This is the first time I've seen a disclaimer of this sort, so I have a couple of questions: Have there been any other articles published by the Signpost that were largely written by LLMs? What's the Signpost's official stance or policy on publishing articles that are largely written by AI/LLMs? (e.g. If an author uses an LLM to help write their article for the Signpost, are they required to disclose this? Or are they not required to disclose this?) And do you think the answer to the second question should be added to Wikipedia:Wikipedia Signpost/About for transparency purposes? Some1 (talk) 07:00, 18 January 2026 (UTC)

Sure, walltext in the morning. jp×g🗯️ 14:14, 18 January 2026 (UTC)
The original post is 167 words; I think it's survivable. ✶Quxyz✶ (talk) 14:17, 18 January 2026 (UTC)
This is the first time I (and I'm sure others) have encountered a Signpost article where AI/an LLM was used "to help write" it. I look forward to reading the wall of text, which I hope answers the three questions in detail. Some1 (talk) 14:34, 18 January 2026 (UTC)
Quoting myself:
I can't really figure out what to think about this. The main precipitating point of offense seems to be that the guy who wrote it used Claude to copyedit it. I do not remember anyone responding in this way to the (much lower-quality) GPT-3 assisted articles we ran in summer '22; at that time nobody seemed bothered by it, most people did not really have an opinion one way or the other, and to the extent the presence of a LLM was noted at all it was complimentary. Of course, that was a bit of a stunt (it cost me about ten bucks to run all the queries, and then I had to spend about twice as long to write the thing in tiny sections, then stitch them together then fact-check them). But I don't really get how the situation changed between then and now, other than a bunch of politics stuff.
You can go read these if you want; I believe they were the deletion report and the arbitration report in the August issue of that year. There was no negative feedback that I recall, and I offered a bounty of ten bucks to anybody who could spot a factual error in either of them.
It was never claimed.
Mostly, this was not done again because of the great expense and the tedious process of distilling the input to be parsed and stitching together the output (at the time context windows were far shorter). It also took a very long time to fact-check it, and this was at a time when GPT-3 (precisely, gpt-3.0-davinci) was very prone to making errors.
The Signpost has never had a formal policy on the use of LLMs, because it's just not a thing that ever comes up. Apart from the two articles from 2022 that received pretty good feedback and generated no complaints, there has never been any cause to do it. I have never done it myself. There may be some slop among the declined submissions. People submit stuff all the time that we do not run; we get stuff with unclear or no relevance to Wikimedia projects, we get drafts that are obviously not finished, we get inchoate screeds, we get jeremiads and shitposts. I am generally in favor of publishing whatever we get that is decent and not going to get us sued or annied.[1] I have never published a submission that felt slopesque to me, besides this one. I do not run submissions through a "detector", which all feel to be about as reliable as dowsing rods.
Another quote from myself:
When I publish something in the Signpost, it is because I think it is interesting, informative, entertaining, or wise. In the event that it fails to produce the same impression in others, it can usually at least provoke some discussion that has these qualities. The usual way I find out if people think an article rules or sucks is that they leave a comment saying something like "this rules" or "this sucks". I am in favor of this, because otherwise I don't really know how to predict what kind of thing people want to read. Sometimes I will think something is kind of meh, and everyone will love it, and it'll be the biggest article of the whole issue. Other times, I will put a ton of effort into something I expect to pop off massively, and then nobody cares. Anybody who wants to nominate the whole Signpost at MfD because there was an article they thought was lousy once is free to do so.
I do not require that articles agree with my own personal views, or that they be written in my own personal editorial style; if I did this, I think the result would be a very lousy newspaper, and really less of a newspaper and more of a blog.
In this case, the guy who wrote it had been the chair of the WMF board and now had a bunch of stuff to say about the future of the project; I think the things he brought up are relevant and that figuring out how we want to handle them is important. I do not require that everyone who submits an article have English as their first language; there were some vaguely corny and/or slopescent flourishes around the edges of the article, which I figured were mostly irrelevant to the central idea, and not much of an obstacle (most people seem to have had no problem reading it).
I guess the situation here is that some guy wrote an article that some people think was lousy, some on account of they thought it was badly written, and some on account of the guy having the computer help him write it.
Some years ago, an opinion piece was submitted to the Signpost which I wanted to publish, but I objected to some formatting choices (namely that the author had set large amounts of it in brightly colored neon text). They were of the opinion that this formatting was crucial to the piece, and felt that casting it into normal monochrome text would have made it unsuitable for publication, so it ended up not running. Of course, it is not the case that being hot pink and green somehow makes the text false, or causes it to say inappropriate things; my reasoning at the time was that it would unduly distract from the content of the piece (which I thought was good), and that people would just hem and haw about the formatting and there'd be all this drama which had nothing to do with what the article said.
As far as ex cathedra statements in my capacity as editor-in-chief, I guess it is more or less a similar situation to this; normal journalistic guidelines about writing style don't really cover stuff like typesetting an article in Comic Sans, but it seems condign to say that any stylistic choice likely to cause significant acrimony probably ought not to be made, and seeing as this now does, it probably now oughtn't.
Outside the realm of official statements, the interested reader may wish to note that I created WP:LLM, and wrote WP:LLMP, a proposal in early 2023 that would have required attribution for on-wiki use of LLMs. It received about two-thirds support to implement it as a guideline, but the RfC was closed and marked as a failure because the closer did not consider that a high enough percentage (the supports for one RfC option were counted as supporting different versions of that option, so neither version had a majority, and the proposal was treated as opposed).
WP:LLM (whose early revisions looked like this) met a more grisly fate: it was edited vociferously by dozens of people who added a gigantic amount of junk to it, then once it was the size of the planet Neptune, someone plopped a RfC on it, which predictably failed because it was by that point suggesting about 200 different things. The actual original thing I wrote, at the beginning of that, was later proposed by somebody else, and then approved as a guideline, which is WP:NEWLLM. I am more or less burnt out on trying to propose Wikipedia policies, since mostly what happens is that a bunch of people scribble silly stuff all over the thing you wrote, it fails, you write another thing and don't let people scribble on it, and then it's overwhelmingly supported and someone decides to not count half the supports for no apparent reason and it gets closed as a failure, then about three years later somebody else proposes the same thing and gets the Nobel Prize for it or whatever. So it goes. I've also personally deleted a lot of slop. A lot of the people who have decided that this thing sucks in the last couple of years have done less about it than me. Technological development is a transformational force that lies at the heart of human civilization's rise and fall and there is not a snappy sound bite to whether it's good or bad and I am sorry if my opinion does not fit into a tweet/skeet/xeet. jp×g🗯️ 20:23, 29 January 2026 (UTC)
There's a strong norm that LLM-generated text shouldn't be tolerated in articles and in project-space discussions. I don't see why the Signpost would be any different. Mackensen (talk) 16:54, 18 January 2026 (UTC)
Well, the Signpost was experimenting with LLMs as far back as August 2022. But readers have always been explicitly told when content was LLM-generated, as far as I recall. Graham87 (talk) 06:51, 19 January 2026 (UTC)
Even in this case, I would be hesitant to allow more LLM-authored content. The only time I could see it being reasonable is if there is significant commentary regarding the LLM content in regard to novel foci, or if there has been a significant development since the last story on that focus. ✶Quxyz✶ (talk) 15:00, 19 January 2026 (UTC)
@Some1, Quxyz, Mackensen, and Graham87: So far as I know, this is the first Signpost story using AI content, aside from the 2022 experiment which Graham mentioned and which happened when AI had a much different social place. Would any of you like to draft a recommended Signpost stance on AI content? I would appreciate that, and I expect so would JPxG. Like so many things on Wikipedia, The Signpost is a community effort. Instead of answering the questions here, I can confirm that there is no policy in place, and invite you to propose some. Bluerasberry (talk) 00:42, 20 January 2026 (UTC)
Thanks for responding. I'd personally like to see a Signpost "policy" requiring that any article written with the use of an LLM include a disclaimer at the top, specifying what the LLM was used for. The disclaimers could be something along the lines of:
This article was written by an editor who used [ChatGPT/Grok/Gemini/Claude/etc.] to help [write/translate/copyedit/create tables/etc.].
This article includes [imagery/videos/media] generated by [DALL-E/Nano Banana/Sora/Midjourney/etc.]. Some1 (talk) 02:04, 20 January 2026 (UTC)
I would also add a recommendation that the generated content needs to be novel and necessary. ✶Quxyz✶ (talk) 02:58, 20 January 2026 (UTC)
Honestly, the only thing that article had going for it was the assumption of human passion. I copyedited it, and was constantly zoning out, because it was nigh unreadable. And then had most of the edits reverted. Adam Cuerden (talk)Has about 8.8% of all FPs. 11:54, 21 January 2026 (UTC)

Discussion at Wikipedia:Village pump (idea lab)

 You are invited to join the discussion at Wikipedia:Village pump (idea lab) § Signpost on Main page. Vestrian24Bio 12:29, 18 January 2026 (UTC)

bug in function getPreviousIssueDates of SPS.js

Please see the new bug report at User talk:JPxG/SPS.js#bug in function getPreviousIssueDates. —⁠andrybak (talk) 17:27, 29 January 2026 (UTC)

Ccing my reply from there:
For a long time I simply accepted that SPS would fuck up the previous/next links every once in a while, and that figuring it out would be some horrible all-day project. This is actually very simple to fix, with the work you have done, so my giant congratulations for applying some genius to the problem here. My guess is that this may have been originally meant to account for redirects in Signpost article space, prior to the apfilterredir thing(?), which occasionally existed over the years due to publishing errors/script hiccups/typos/etc. However! A couple years ago I went on a massive spree and fixed all of these, so it should no longer be a problem. The only other thing I can think of is that, occasionally, articles have subpages for poll results/etc; but this is uncommon enough it can be fixed when/if it happens. Thanks! jp×g🗯️ 18:27, 29 January 2026 (UTC)
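For readers unfamiliar with what a previous/next-issue computation involves, here is a hedged sketch. This is not the actual SPS.js code; the function name, data shapes, and dates below are illustrative assumptions. The idea is that once redirect titles have been kept out of the list of issues up front (e.g. by querying the MediaWiki API with apfilterredir=nonredirects, as mentioned above), finding the previous and next issue is a simple index lookup in a sorted list of dates:

```javascript
// Hypothetical sketch, not the actual SPS.js implementation: the function
// name and data shapes are assumptions for illustration only.
// Given an ascending-sorted array of issue dates ("YYYY-MM-DD" strings),
// return the previous and next issue dates relative to a given one.
// ISO date strings sort correctly under plain string comparison, so no
// Date parsing is needed.
function getAdjacentIssueDates(issueDates, currentDate) {
  const i = issueDates.indexOf(currentDate);
  if (i === -1) {
    // Date not found among published issues.
    return { previous: null, next: null };
  }
  return {
    previous: i > 0 ? issueDates[i - 1] : null,
    next: i < issueDates.length - 1 ? issueDates[i + 1] : null,
  };
}

// Example with illustrative issue dates:
const issues = ["2025-12-25", "2026-01-15", "2026-01-31"];
const adj = getAdjacentIssueDates(issues, "2026-01-15");
// adj.previous is "2025-12-25"; adj.next is "2026-01-31"
```

With redirects filtered out at query time, a helper like this has no special cases left to handle, which matches the reply above: the remaining oddities (e.g. poll-result subpages) are rare enough to fix by hand.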
  1. Not going to get us justifiably annied; anybody can open a thread for any reason at all, so avoiding getting annied in toto is impossible.