Talk:Opinion polling for the 2023 Australian Indigenous Voice referendum


Removal of April Newspoll[edit]

@DTH89: can you please explain your removal of the April Newspoll? You say it's an aggregate of three other polls and isn't new information. To quote from the Newspoll release, the data is based on analysis of surveys from 1 February to 3 April conducted by YouGov. The last YouGov poll released was in January (it's in this article), so how can the Newspoll release not be new information? 5225C (talk • contributions) 06:00, 6 April 2023 (UTC)[reply]

For the record, I have since re-inserted an April Newspoll with just the April results rather than the February–April aggregate. The YouGov poll was separately conducted for The Daily Telegraph. – Teratix 02:24, 21 April 2023 (UTC)[reply]

Some outside criticism[edit]

Kevin Bonham has pointed out a few problems with how this article currently presents information:

"The graphic on the Wikipedia page devoted to this issue lumps the Yes/No polls and the Yes/No/Undecided polls, as a result of which its current estimates sum to 110%. The graph shows Yes as being relatively stable, but what has happened in recent months is that the sample space has been increasingly dominated by Yes/No polls, and this is creating a false appearance of stasis in the Yes vote. On a two-answer preferred basis, what is going on is different."

I can think of a few ways to fix this problem:

  • Convert Yes/No/Undecided polls to Yes/No polls for the overall graph by excluding undecideds and recalculating percentages (e.g. a 45/45/10 poll becomes a 50/50 poll).
  • Disaggregate Yes/No/Undecided polls on a separate graph.
  • Both of the above.
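The recalculation in the first option is simple arithmetic. As a sketch (a hypothetical helper, not code from the article):

```python
def two_answer_preferred(yes, no, undecided):
    """Convert a Yes/No/Undecided poll to a two-answer basis by
    excluding undecideds and rescaling Yes and No to sum to 100."""
    decided = yes + no
    return round(100 * yes / decided, 1), round(100 * no / decided, 1)

# The worked example from above: a 45/45/10 poll becomes 50/50.
print(two_answer_preferred(45, 45, 10))  # (50.0, 50.0)
```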

What does everyone else think? – Teratix 14:59, 7 April 2023 (UTC)[reply]

 Done, option 2. – Teratix 02:24, 21 April 2023 (UTC)[reply]

Other things to improve[edit]

  • Only listing state-level responses when a reasonable sample size is available, i.e. if we have a national poll with 1000 respondents of which only around 50 are Tasmanians, we should not just blindly report the poll's figures for Tasmania.
  • Making it clear the Date column is for when polling was conducted, not when it was released  Done
  • Maybe adding a column for polling method  Not done, on further investigation almost every poll is online
  • More prose (Opinion polling for the United Kingdom European Union membership referendum might be a model here)

Teratix 15:16, 7 April 2023 (UTC)[reply]

Updated – Teratix 02:17, 21 April 2023 (UTC)[reply]

Simulating Yes/No results from Yes/No/Undecided questions[edit]

Several pollsters ask a Yes/No/Undecided question but do not ask a Yes/No question: chiefly Newspoll, but also Roy Morgan and SEC Newgate. It's possible to simulate Yes/No results from the Yes/No/Undecided results by discounting people who were undecided and recalculating Yes/No percentages based on the new sample. Should we do this?

I don't think we should, because it presumes undecided people will split in a particular way. Even if we determine this presumption is justified, this would seem to be original research on our part. And it would create inconsistency in how the Yes/No column is presented, since some results would come from an actual forced-choice question whereas others would come from calculation. – Teratix 02:14, 21 April 2023 (UTC)[reply]

I agree that we should not extrapolate the data like this, especially given that many of the undecideds appear to be moving to a 'No' position in recent polls. Also agree on the inconsistency issue Jltech (talk) 13:00, 21 April 2023 (UTC)[reply]
I couldn't disagree more with Teratix-from-three-months-ago. He claims simulating results in this manner would be original research – but clearly he had not read as far as WP:CALC, which specifies routine calculations and basic arithmetic are perfectly acceptable. He worries about inconsistency in how the results are presented, but he hadn't anticipated that we could note the discrepancies with footnotes and differentiate based on decimal places. He also did not anticipate that more polling aggregations would become available, all of which would simulate binary results without problems. – Teratix 18:18, 18 July 2023 (UTC)[reply]
I had another think about it last night, and I have changed my mind; I think you are right. However, I don't think we should be using decimal places, because it implies an unwarranted sense of accuracy (i.e. too many significant figures). I think we should instead italicise them, which could be explained in a footnote. Also, I didn't realise that you were the same person who wrote here a few months ago, so I'm sorry for not noticing that. 2204happy (talk) 05:24, 19 July 2023 (UTC)[reply]
All good! My reply was meant to just be a bit of fun, but on reflection it might have come across as a bit snide. Sorry about that. I don't have especially strong feelings on italics vs. decimal places. – Teratix 06:31, 22 July 2023 (UTC)[reply]

Small state results[edit]

I've been mulling over how this article should present results for smaller states. According to Goot, your ordinary 1000–1500-respondent monthly poll normally gets something like 100 WA, 75 SA, and 25 Tasmanian respondents – not really enough to give reliable results for these states, but some pollsters will report them anyway, and usually without giving per-state sample sizes. If we just report these blindly, you end up with a lot of unreliable samples flooding the table because the more careful pollsters will either withhold their small-state samples or make them larger by combining them with another month's polls.

To counteract this, I propose adopting a standard of only posting WA/SA/Tasmania results where we can be confident the sample is large enough – i.e. two-/three-month compilations, individual state polls, or unusually large national polls. This is a reasonably big decision because it would entail removing a few existing results from the page and I imagine it might clash with newer editors who just come to add the latest result, so I'd like to get consensus from some of the regular editors here if possible? (@5225C:, @Nford24:, @Thiscouldbeauser:, @Jltech:, @La lopi:, @GMH Melbourne:) – Teratix 15:42, 29 April 2023 (UTC)[reply]

Oh, and while we're at it we might as well clear up that slow-motion edit war over whether a 50% result should be formatted as a "yes". I think Nford's point is fair enough, if the referendum received exactly 50% of the vote it would fail, so I'd be in favour of dropping the green highlights for those results. Probably a better solution is just to do what's been done in the Brexit polling article: softer shade for pluralities, harder shade for majorities. – Teratix 15:49, 29 April 2023 (UTC)[reply]
Seems reasonable enough to me. 5225C (talk • contributions) 01:58, 30 April 2023 (UTC)[reply]
I agree it seems reasonable. The information I've seen from the AEC suggests that under a single majority 50% would pass, but not under a double majority at the state count. Nford24 (PE121 Personnel Request Form) 05:43, 30 April 2023 (UTC)[reply]
5225C and Nford, what about the other issue of how to present small-state results? 5225C, I see you've added back the Morgan small-state results, so I gather there might be disagreement on some level – what are your thoughts? – Teratix 08:38, 30 April 2023 (UTC)[reply]
There's nothing in the source that discusses state by state sample sizes. It would be unreasonable to simply assume they are distributed a certain way (possibly OR). 5225C (talk • contributions) 04:06, 1 May 2023 (UTC)[reply]
OR is about inventing or implying something in an article that's not said by the sources. It doesn't preclude us from making basic editorial decisions about these sources' reliability.
But if you need a source, there's a passage in the Goot source (p. 7–8) which discusses the characteristics of state-level data for these polls, specifically saying "in most of the states, most of the pollsters had too few respondents" and estimating for a 1000-person poll that "in Western Australia, the number of respondents would have been around 100, in South Australia around 75, and in Tasmania around 25". – Teratix 04:41, 1 May 2023 (UTC)[reply]
So instead of us guessing, Goot can guess for us? Is there anything that would make us accept Goot's guess as good enough for our purposes? Couldn't this lead to us rejecting state results which might be significant, just because we chose to use Goot's guess? This seems quite arbitrary to me. 5225C (talk • contributions) 05:22, 1 May 2023 (UTC)[reply]
Well, Goot is among the leading academic experts on Australian opinion polling, so yes, I think we have grounds to accept his estimates as reliable for our purposes. Plus, examining real polls with similar overall sample sizes, where the individual sample sizes for small states have been provided, they accord with Goot's estimates (e.g. March Essential, n = 1124, has n(WA) = 111, n(SA) = 87; February Essential, n = ~1000, has n(WA) = 97, n(SA) = 73, and so on).
I don't think there's much risk of rejecting significant state results, because the whole reason to avoid presenting results from tiny sample sizes is that random, meaningless shifts can appear extremely significant. For example, in the Morgan data on Tasmania there has allegedly been a 30%(!) swing away from the Yes vote between December and April, but in reality those polls have almost certainly surveyed only about 30 Tasmanians each, so random variation in which Tasmanians happen to be surveyed has an outsize impact on the results.
Presenting this small-state data and pretending this swing is actually meaningful is more misleading than just giving the national-level data. – Teratix 07:24, 1 May 2023 (UTC)[reply]
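The sample-size point can be quantified with the standard margin-of-error approximation for a proportion (a textbook formula, not figures from the cited polls):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error (in percentage points) for a
    simple random sample of size n estimating a proportion p."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Goot's estimated state subsamples for a ~1000-person national poll
for n in (1000, 100, 75, 25):
    print(f"n = {n:4d}: ±{margin_of_error(n):.1f} points")
```

At n = 25 this gives roughly ±19.6 points, so a 30-point Tasmanian "swing" between two such polls is well within sampling noise.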
(5225C, this discussion has gone cold somewhat – what about a compromise where we present the data but note the views on its unreliability?) – Teratix 13:28, 16 May 2023 (UTC)[reply]
Sounds good to me. 5225C (talk • contributions) 13:45, 16 May 2023 (UTC)[reply]

Polls Graphs[edit]

Since mid-April Wikipedia has had an issue with polls, with graphs saying "Graphs are temporarily unavailable due to technical issues". To deal with this issue, could somebody make a poll map which won't have the same problem as the graphs we currently have? La lopi (talk) 04:57, 30 April 2023 (UTC)[reply]

I've been tinkering with the R code for similar polling graphs in other articles to try and get something that works here. I've got a graph that looks pretty good, but my programming skill in general, let alone my R knowledge, is basically non-existent, and the code overall feels a bit hacky, so I'm not confident it's reliable enough to put in an article yet. Is there somewhere we can post a call-out for an editor with a bit more technical know-how? – Teratix 08:46, 30 April 2023 (UTC)[reply]
You can try https://en.wikipedia.org/wiki/Wikipedia:Graphics_Lab/Map_workshop .La lopi (talk) 12:29, 30 April 2023 (UTC)[reply]
Update: I couldn't find anywhere on that page that catered for graphs in particular. In the meantime, I've put the graph I've made in the article – not perfect, but better than nothing at all. – Teratix 15:43, 6 May 2023 (UTC)[reply]
Graphs including an undecided option are in as well – there's a bit of an inelegant bump in the post-election graph caused by weighting the YouGov poll quite highly, but that should even out as more data comes in. – Teratix 03:26, 7 May 2023 (UTC)[reply]
Hi again, thanks for doing that. Just wondering, can you make a map for each state as well? Thanks again. La lopi (talk) 10:10, 10 May 2023 (UTC)[reply]

Support by party[edit]

First off, thank you 5225C for compiling this data, it's hard work to trawl through all those polls and definitely the table is something the page was sorely lacking. Having said that, I don't think we need separate columns for independents or UAP supporters – too few polls have separated these respondents out to justify the extra space required for the columns. A column for One Nation is also close to overkill. We can handle these special cases with footnotes noting the additional party results. Thoughts? – Teratix 13:25, 16 May 2023 (UTC)[reply]

  • I can agree with removing UAP since they've only been independently recorded once, and they are on the verge of no longer existing. The others all have enough data for a column. I personally wouldn't remove independents: the independent movement is much more cohesive now and as can be seen in the chart, they do differ from the "other" grouping substantially. I would definitely disagree with removing PHON since they are active and prominent in the No campaign. 5225C (talk • contributions) 13:50, 16 May 2023 (UTC)[reply]
    5225C, it's not really about how active or cohesive a party or movement is, it's about whether there is sufficient data for a column.
    For independents, there are only four polls listing results: two Morgan, an IPA and a Compass. The IPA poll, which is already skewed, specifically lists teal independents rather than independents in general, and isn't really a poll about the referendum because it didn't ask about the Voice as a constitutional change. So I would say there are actually only three such polls, coming from only two firms. We also don't know much about the sample sizes involved. Compared to the healthier amount of data available for the major parties and the Greens, the data we have for independents is negligible.
    One Nation has a little more data from the Australia Institute and 2018 Newspoll (which, similar to the IPA poll, is also arguably not a true referendum poll). I think we're still scraping the bottom of the barrel here, but I'm happy to compromise and leave this in if we take out independents. We could still leave footnotes for interested readers to signal the few polls listing results for independents. – Teratix 05:15, 22 May 2023 (UTC)[reply]
    I understand your point regarding independents. Footnotes seem a reasonable solution. 5225C (talk • contributions) 07:07, 22 May 2023 (UTC)[reply]

Adjusting graph methodology[edit]

I've updated the graphs for the May polling that's come in, but I would like to make some adjustments to how they're constructed:

  • Restricting the graphs to post-election polls (post-May 2022), i.e. removing the 2017-onwards graph. There has simply not been enough pre-election polling to derive a meaningful trend; the election was a critical point in the Voice debate's progression, reflected in how many more polls have been conducted afterwards; and this would be in line with other poll aggregations I've seen (The Age/SMH and Bonham).
  • Restricting the three-option graph to questions that do not probe for leanings: firms' methodologies for presenting undecided voters can be very different, which is why the undecided trendline makes so little sense. The main culprit here seems to be YouGov and its brand Newspoll, which follows up undecided voters with a "leaner" question prompting them again to make a decision, which is why their undecided figures are unusually low. Restricting the three-option graph to non-probing questions would ensure more consistency.
  • Aggregating three-option questions that do probe for leanings onto the two-answer graph: following on from the previous point, if we're going to exclude YouGov/Newspoll from the three-option graph, we shouldn't just throw out the data. At the same time, there's not enough of it for its own graph, so the least bad option is to convert it to a two-answer-preferred basis (by excluding undecideds and recalculating percentages accordingly) and include it on the two-answer graph instead.

TL;DR: The practical effect of this proposal would be to remove the 2017-onwards graph and move YouGov/Newspoll results to the two-answer graph. I will probably just do this if there's no opposition. – Teratix 06:54, 20 May 2023 (UTC)[reply]

Territory and electorate polling[edit]

@Teratix I think that we should include territory and electorate polling, even though only states contribute to passing a referendum (i.e. to pass a referendum in Australia you need over 50% in favour nationwide and over 50% in favour in at least four of the six states), because A: it's interesting and factual and B: we include this stuff in plenty of other articles (e.g. Opinion polling for the 2022 Australian federal election). I've added it back. Any thoughts? Thiscouldbeauser (talk) 08:20, 17 June 2023 (UTC)[reply]

Thanks for starting this discussion.
Because Wikipedia is not merely a statistical repository, our criteria for including data can't merely be whatever is interesting and factual – it has to demonstrate significance. Despite the dozens of polls conducted on the referendum, very few have reported territory or electorate results. Even when these results have been reported, they haven't received much (if any!) academic or media commentary. Simply put, our sources don't consider these results significant, and including them in the article risks giving undue weight to a small number of polls.
Aside from these broad problems, there are specific problems with the particular polls in the tables. YouGov used multilevel regression with poststratification to produce their electorate results. What this means is, rather than a Sydney result coming exclusively from a survey of Sydney respondents, YouGov combines this with the national result adjusted for Sydney's demographics. Now, this doesn't mean the result is unreliable (MRP has performed well in the past), but it's misleading to claim the Sydney result is an electorate poll when it actually incorporated national results.
Additionally, The Uluru Dialogue, a pro-Yes group, commissioned the poll. I don't necessarily have a problem with including polls commissioned by groups affiliated to one side, but it is suspect when a Yes-commissioned poll nearly monopolises these categories and, again, risks giving the results undue weight.
Polls from MPs surveying their constituents, like Pat Conaghan's, are likely to receive a disproportionate share of responses from respondents aligned with their MP.
Regarding your second point, I don't believe an analogy to polling from federal elections is valid. Electorate polling is much more significant in federal elections because each electorate contributes an MP, so every result matters. This is not the case at referendums. Accordingly, many more electorate polls are conducted for federal elections and there's much more commentary on these polls.
For these reasons and because the onus is usually on the editor adding new material to justify their change, I've removed the territory and electorate polls again. – Teratix 11:43, 17 June 2023 (UTC)[reply]
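For readers unfamiliar with MRP, the poststratification step mentioned above can be illustrated with a toy calculation: modelled support rates for demographic cells are weighted by the electorate's composition. All numbers and cells below are invented for illustration; they are not from the YouGov poll.

```python
# Toy poststratification (the "P" in MRP): combine modelled support
# rates for demographic cells with an electorate's cell weights.
# Cells and figures are hypothetical, not from any real poll.
modelled_support = {    # modelled P(Yes) by age group (hypothetical)
    "18-34": 0.62,
    "35-54": 0.45,
    "55+":   0.33,
}
electorate_weights = {  # share of the electorate in each cell (hypothetical)
    "18-34": 0.30,
    "35-54": 0.40,
    "55+":   0.30,
}
estimate = sum(modelled_support[c] * electorate_weights[c]
               for c in modelled_support)
print(f"Poststratified Yes estimate: {estimate:.1%}")  # 46.5%
```

This is why an "electorate result" from MRP leans on the national model: the support rates come from the whole survey, and only the weights are local.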

Aggregating undecideds[edit]

Recently I removed the graphical aggregation that included an estimate of the trend in undecideds, on the grounds that pollsters' methods for putting a number on "undecideds" are so different that an aggregation would either be too inclusive and therefore misleading, or too exclusive and therefore in thrall to too small a number of pollsters. Danielcstone spoke up on my talk page to advocate restoring the graph, and I've given a more detailed account there of what I believe the difficulties are. Posting here as well for comment. – Teratix 19:53, 25 July 2023 (UTC)[reply]

New Newspoll, Monday 7th August. They’re 3-weekly.[edit]

Wouldn’t someone with access through their paywall like to put it up, please? It’s nigh on 10am, so it’s been out for many hours. TIA to whoever does. Boscaswell talk 23:54, 6 August 2023 (UTC)[reply]

Today's Newspoll is an aggregate of previous results and does not present new data. 5225C (talk • contributions) 04:01, 7 August 2023 (UTC)[reply]
Notably it's aggregating June–July data when they've already aggregated May–June, i.e. they're doubling up, probably just to get something out the door on schedule. – Teratix 06:41, 7 August 2023 (UTC)[reply]
YouGov (who conducts Newspoll) is reportedly a mess at the moment, most of their Australian polling personnel left the company between their June and July polls, including their head of department. I wouldn't be surprised if the August poll is delayed or even hasn't been run in the first place. – Teratix 06:38, 7 August 2023 (UTC)[reply]
Thanks, 5225C and Teratix, and my sincere apologies for the misconception, that there was a new survey. Boscaswell talk 17:41, 7 August 2023 (UTC)[reply]
It's alright, a new survey would have been expected under ordinary circumstances. – Teratix 18:13, 7 August 2023 (UTC)[reply]
Teratix thanks. Sorry to harp on about this, but about their aggregation of previously reported data: from what I’ve been able to gather, the data has now been presented state by state? If so, we haven’t tabulated it. The text at the top of The Australian's report, which is all I’m able to see, says, "Support… has fallen below 50% in every state". Boscaswell talk 21:34, 10 August 2023 (UTC)[reply]
It's true we haven't used the most recent tabulation, but we have already tabulated the May–June data. The most recent analysis tabulates May–July (not June–July as I previously stated), so if we were to use the most recent report we would be effectively doubling up on the May–June data. I would rather wait until YouGov gets their act together to see if they release any non-overlapping aggregations later on. – Teratix 02:07, 11 August 2023 (UTC)[reply]

Essential state data[edit]

The Essential report released on September 5 provides two lots of state data: the "Support for Voice to Parliament" section gives % support for NSW, Victoria, and Queensland, and the "Voice to Parliament Voter Strength" section gives % hard yes, % soft yes, % undecided, % soft no and % hard no for NSW, Victoria, Queensland, Western Australia and South Australia. Uwium originally added the latter data to the state polling section (combining % hard and % soft to calculate overall percentages). However, 124.170.25.101 later overwrote this with the first lot of data.

I have opened this section to resolve the dispute. Personally, I favour using the second lot of data because it is more comprehensive: it gives percentages for yes, no and undecided, and includes South Australia and Western Australia. – Teratix 07:35, 5 September 2023 (UTC)[reply]

Regarding the Essential Polling published on 5 September I noticed that in the general polling when undecideds were forced to make a binary choice, they split evenly between yes and no. In the secondary "Strength of preference" question, it seems that the undecideds almost all went to the no side. This doesn't make sense statistically and suggests persuasion bias in how the second question was asked. That all previous Essential polls have skewed +3% toward yes compared to the trend line reinforces that suspicion, as the bias is also in favour of the yes campaign. Perhaps inclusion of both is a more informative and balanced presentation. — Preceding unsigned comment added by 124.170.25.101 (talk)
OK, I think I've worked out what's going on here – it's a little complicated and is partially because of Essential's ambiguous data presentation, so please bear with me.
So the first thing to note is Essential no longer asks a forced-choice question on the Voice, but rather a standard three-option question with a leaner, much like Newspoll. This means that despite appearances, the state data in the "Support for Voice to Parliament (by state)" question is not from a binary choice. That is, when that question reports e.g. 34% support in Queensland, that does not automatically mean 66% opposition in Queensland. Rather, % opposition + % undecided = 66%. That's why it looks like "the undecideds almost all went to the no side" – they didn't, it's just a quirk in how the data has been presented.
The actual composition is revealed via the second question's data. Although it asks about strength rather than intention, we can infer what the first question's results must have been by summing the hard and soft views. So for Queensland: 27% hard yes + 8% soft yes = 35% yes, 50% hard no + 8% soft no = 58% no, and 8% undecided. (Yes, 35% ≠ 34%; this is probably a rounding artefact.) – Teratix 10:03, 5 September 2023 (UTC)[reply]
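The inference is just column sums; as a sketch using the Queensland figures quoted in the Essential report:

```python
# Reconstructing three-option results from Essential's strength-of-
# preference data (Queensland figures from the discussion above).
hard_yes, soft_yes = 27, 8
hard_no, soft_no = 50, 8
undecided = 8

yes = hard_yes + soft_yes  # 35
no = hard_no + soft_no     # 58
print(yes, no, undecided)  # 35 58 8 (sums to 101 due to rounding)
```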

Redbridge State figures for NSW: 61% Yes. Really?[edit]

I can’t see figures for NSW in the citations. 61% Yes looks very wrong. Boscaswell talk 19:46, 9 September 2023 (UTC)[reply]

It is. Support for the Voice to Parliament is in “freefall”, having dropped 5 per cent in a month, and is now below 40 per cent in every state except Victoria. StAnselm (talk) 21:17, 9 September 2023 (UTC)[reply]
I'm guessing it's 39% Yes / 61% No, but as you say - it's not in the citations. StAnselm (talk) 21:20, 9 September 2023 (UTC)[reply]
StAnselm if it’s "below 40% in every state except for Victoria" then it's highly likely that the figures have been transposed. In any case, it would be mathematically impossible for it to be 61% Yes in NSW when it’s 61% No overall and the other big states are both No's. So we need to reverse those figures? Boscaswell talk 21:27, 9 September 2023 (UTC)[reply]
I lean towards removing them altogether if there is no reference for them. StAnselm (talk) 21:28, 9 September 2023 (UTC)[reply]
I agree. Boscaswell talk 21:34, 9 September 2023 (UTC)[reply]
Looking at the other State figures, they both look a bit suspect, as well. The Redbridge binary percentages each moved by 5, but those for Victoria stayed the same and those for Qld only moved by 2. Vic and Qld being, together with NSW, the most populous states by far, I have some doubt that those figures are correct. Boscaswell talk 21:46, 9 September 2023 (UTC)[reply]
Feel free to double-check (p.8), I'm pretty confident they are correct. 5225C (talk • contributions) 02:28, 10 September 2023 (UTC)[reply]

Yes, correct, 61% was right, just in the wrong spot. ;) Boscaswell talk 03:31, 10 September 2023 (UTC)[reply]

Meaning of colours[edit]

The tables have some cells coloured, sometimes dark, sometimes light, in different columns. It's not at all clear what the colours mean or what information is being conveyed. It might be pretty, but it's not helping much. Maybe some explanation for readers (and editors too!) is in order? Alex Sims (talk) 11:47, 12 September 2023 (UTC)[reply]

  • I thought it was fairly obvious, dark colouring is majorities, light colours are pluralities. It visualises the trends in the data instead of having a wall of numbers. This is pretty standard across Wikipedia articles dealing with elections and polling. I don't really see a need to explain this to readers. 5225C (talk • contributions) 12:08, 12 September 2023 (UTC)[reply]
    It’s not obvious, not at all. Imagine that you’re someone new to it all: you’re simply not going to know why the shading is different. I myself had thought that it was large differences that were darker, hence my edit spiel yesterday. And I’m highly numerate. Boscaswell talk 02:54, 28 September 2023 (UTC)[reply]
The table probably needs a key anyway to keep the footnotes under control, seems logical to explain colour as part of this. – Teratix 15:56, 12 September 2023 (UTC)[reply]

Width of state poll results[edit]

Another whinge from me. I have to use horizontal scrolling to see the results I'm looking for in this table. It's really a bit wide, and might be an entry in the Guinness Book of Records for most columns in a Wikipedia table :) . If someone has the energy, maybe have three (sub) rows for each poll for Y/N/DK; this would reduce the number of columns by about two thirds. Alex Sims (talk) 11:52, 12 September 2023 (UTC)[reply]

  • Would sort of prevent people from reading left to right though, which would look very odd. It's impossible to make the table display in its entirety on every single device and browser when there is such a variety of displays, zoom levels, font sizes, etc. Making the poll info columns floating would be a better solution but I don't know how the technical implementation for that works. 5225C (talk • contributions) 12:11, 12 September 2023 (UTC)[reply]
Believe me, I really racked my brain on how else we could usefully display this data without resorting to an übertable – I agree it's wider than ideal! I just thought the other options were worse: for example, the page had originally separate tables and graphs for every state. Although the tables were certainly thinner, they took up a lot of vertical space and meant we had to duplicate a lot of data (e.g. listing dates, firms and references six times instead of once). Sub-rows are another option but they would take away the ability to easily glean the state trends over time by glancing down the columns, and would also break sorting. – Teratix 15:49, 12 September 2023 (UTC)[reply]

"Latest" Redbridge[edit]

Added in today, with 38% Yes. Is this a kind of repeat of the figures from earlier in the month by them? I ask this because in their reporting of it on the Sky News Australia website they say it’s a smaller Yes percentage at 39%, but actually that 39% is the figure reported earlier in the month. Boscaswell talk 04:16, 24 September 2023 (UTC)[reply]

No, this is a new poll, Sky has misreported the figures. – Teratix 12:29, 24 September 2023 (UTC)[reply]
Thanks, Teratix. Boscaswell talk 18:08, 24 September 2023 (UTC)[reply]

Sep 19th Essential State/Party breakdown[edit]

Can someone add the Sep 19th Essential report into the State/Party breakdown tables? https://essentialreport.com.au/reports/19-september-2023 It's already in the national table. 124.169.128.192 (talk) 05:24, 24 September 2023 (UTC)[reply]

Polling By Age Group[edit]

I have seen a couple of polls which break down results by age group. Could somebody add polling by age group to the Subpopulation results section, alongside Results by state and Results by party affiliation? Muaza Husni (talk) 07:50, 25 September 2023 (UTC)[reply]

The difficulty here is that pollsters are extremely inconsistent about how to divide up the age groups. The article definitely needs to discuss the age-related findings, but I think they're better summarised in prose rather than a table – the data's just too inconsistent to be comparable. – Teratix 11:36, 26 September 2023 (UTC)[reply]
So when do you think somebody could start on the age-related findings? Muaza Husni (talk) 04:40, 29 September 2023 (UTC)[reply]
Feel free to do it yourself if you like, just make sure to cite reliable sources and so on. – Teratix 14:22, 29 September 2023 (UTC)[reply]
So how can we deal with the issue of pollsters being extremely inconsistent about how they divide up the age groups? Muaza Husni (talk) 12:38, 3 October 2023 (UTC)[reply]
You can't, it's just an inherently messy and inconsistent dataset. I don't see how it could ever be presented in a way that would be useful for readers. 5225C (talk • contributions) 13:19, 3 October 2023 (UTC)[reply]

When does the yes/no trend graph get updated?[edit]

It seems overdue, since it was biweekly or so. Greglocock (talk) 04:05, 27 September 2023 (UTC)[reply]

Unfortunately, my computer is in for service until Friday. Next time I'll post the graph code and relevant CSV publicly so others can update even if I'm absent. – Teratix 10:16, 27 September 2023 (UTC)[reply]
Thank you. If you can remember what settings are used in LOWESS? Greglocock (talk) 16:56, 27 September 2023 (UTC)[reply]
Now updated. – Teratix 16:44, 28 September 2023 (UTC)[reply]
It's not plotting right up until 23 Sept. Greglocock (talk) 17:23, 28 September 2023 (UTC)[reply]
Forget that, it's a caching problem. Greglocock (talk) 22:22, 28 September 2023 (UTC)[reply]

How sensitive should our aggregation be?[edit]

span = 0.7, status quo, "very slight flattening"
span = 0.3, "trending back towards Yes".
span = 0.5, "flattening out".
span = 0.9, "trending towards No".

With the last-minute influx of polling ahead of the referendum, including an unusually Yes-friendly Morgan poll and Essential showing Yes gaining ground for the first time in a poll series since June, I've been experimenting with changing span values in LOESS to see what effect that has on the trendline. Different LOESS spans can have major effects on the sensitivity and smoothness of an aggregation, affecting how the "story" of polling is told. Oversensitive aggregations can exaggerate chance fluctuations or outlier polls, but undersensitive aggregations can overlook genuine changes in a trend.

Since August, I've set the span at 0.7, which was about the smallest value that produced a smooth trendline. Smaller values made the trend more erratic without altering its direction, meaning the additional sensitivity wasn't worth much. However, when testing span adjustments more recently, I've found the effects are much more dramatic, even outright determining whether the trend appears to be towards No, Yes or flat. I've uploaded some alternate versions of the graph so you can see what I mean.
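For anyone who wants to reproduce this experiment, here is a minimal sketch of how the span parameter changes a LOESS fit. It assumes statsmodels (where the span is called `frac`); the poll data is invented for illustration, not the article's actual dataset.

```python
# Sketch: how LOESS span ("frac" in statsmodels) affects the trendline.
# The "poll" data below is synthetic, purely for illustration.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
days = np.arange(120, dtype=float)                     # days since first poll
yes = 55 - 0.08 * days + rng.normal(0, 2, days.size)   # noisy downward trend

for span in (0.3, 0.5, 0.7, 0.9):
    # Smaller spans track short-term movement; larger spans smooth it away.
    fitted = lowess(yes, days, frac=span, return_sorted=False)
    print(f"span={span}: final trend estimate {fitted[-1]:.1f}%")
```

Comparing the final fitted value across spans shows how much the "direction" of the trend can depend on this one judgement call.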

Given how influential the aggregation is on our article's polling "narrative", I thought I should open this up to broader discussion on the talk page rather than make a unilateral decision about what value for span is appropriate. – Teratix 02:44, 4 October 2023 (UTC)[reply]

Thank you. That's an interesting study. I must admit these statistical techniques that rely on judgement calls leave me cold. I'll read up on LOWESS and see if there are any clues. Greglocock (talk) 03:44, 4 October 2023 (UTC)[reply]
I think the way I'd do it is to compare the number of points above the trendline in a given month (say) with the number below. These should be roughly equal. Taking September, I think that suggests (by eye) that 0.3 is too aggressive. Greglocock (talk) 06:52, 4 October 2023 (UTC)[reply]
Yes, 0.3 looks unduly sensitive and something between 0.5–0.7 looks more reasonable. – Teratix 02:42, 6 October 2023 (UTC)[reply]
I have opted to slightly reduce the span to 0.65, would welcome any further comment. – Teratix 04:53, 9 October 2023 (UTC)[reply]
I think 0.5 is a good balance, 0.3 is far too sensitive and I think 0.7 doesn't show the changes in day-to-day polling. Vizra JR (talk) 07:10, 9 October 2023 (UTC)[reply]
OK, a lot of polls have come in since I started this discussion and there seems to be much more of a consensus on a flattening trend rather than a trend towards Yes or No, and varying span values seems to now have much less effect. So for consistency I will keep the span as is (0.65) unless there are any strong objections. – Teratix 04:06, 13 October 2023 (UTC)[reply]
Thanks for your great work on this. Greglocock (talk) 04:59, 13 October 2023 (UTC)[reply]

Thank you[edit]

Appreciated this resource over the last three months. You nailed it. 110.54.162.239 (talk) 11:32, 14 October 2023 (UTC)[reply]

Moving forward[edit]

We collated, to my knowledge, the most comprehensive public repository of polling data on the referendum.

Thank you to Canley – the aggregation in this article used a version of his aggregation code for Australian federal elections. In the absence of a functioning graph extension, it was an invaluable resource. This article's usefulness would have been much diminished without Canley's work.

Thank you to all the anonymous, new or experienced editors who chipped in to add a poll, citation or correction, gave feedback on the talk page, or simply expressed support. It's this ability for anyone to instantly update and enhance articles that puts Wikipedia a cut above other places on the internet. Particular thanks to 5225C, who collated many polls and most of the party affiliation data.

No doubt, in the aftermath of the referendum, the polls will undergo thorough and extensive academic analyses. Moving forward, I hope we can use these analyses to contextualise the data and, ultimately, get this page to featured list status, which as far as I'm aware would be unprecedented for an opinion polling list. – Teratix 14:54, 15 October 2023 (UTC)[reply]

Newspoll final[edit]

The Australian published a two-answer figure of 40 Yes and 60 No on the eve of referendum day. This clearly trumps any calculation anyone can make from the three-answer alternative, because no one editing Wikipedia has access to the original data the pollster used. It's pathetic that self-appointed wiki heroes keep overwriting the figures published in The Australian and approved by the pollster themselves, as stated here: [1]https://x.com/pyxisinsights/status/1717907159516840233?s=20. Please stop changing this; you're making this page a joke. 159.196.170.65 (talk) 14:16, 27 October 2023 (UTC)[reply]

This has got to be up there with the absolute lamest edit wars I've ever seen on Wikipedia. Seriously, it's an argument over whether we should round an opinion poll to 39 or 40, when the referendum has been done and dusted for weeks. Did it really need nine back-to-back reverts? – Teratix 03:42, 28 October 2023 (UTC)[reply]