Hi. I have been away for some days recovering from a laptop disaster.
I have reinstalled Python 2.7.14
Now, when I right click a Python program and select Edit with IDLE, nothing happens (no edit window, no message, nothing).
Any ideas? SemperBlotto (talk) 19:32, 2 February 2018 (UTC)[reply]
"Try deleting the contents of the .idlerc folder in your profile. To open the folder just type and enter %USERPROFILE%.idlerc." DTLHS (talk) 19:59, 2 February 2018 (UTC)[reply]
I reinstalled pywikibot. Now, when I try running the bot, I get:-
Traceback (most recent call last):
File "C:\Python-it\itnouns.py", line 7, in <module>
import pywikibot, config
File "C:\Python-it\pywikibot\__init__.py", line 15, in <module>
from textlib import *
File "C:\Python-it\pywikibot\textlib.py", line 17, in <module>
import wikipedia as pywikibot
File "C:\Python-it\wikipedia.py", line 7559, in <module>
get_throttle = Throttle()
Hi! We (@Isbms27 and myself) would like to propose a single unified system of usage labels for wiktionaries in all languages. Such labels exist in some wiktionaries (English, Russian, etc.), but their use is not systematic, and in some other languages they are not used at all.
We suggest the following categories, the tags are taken as an example from English wiktionary:
1. Usage/Register/Stylistic:
neutral / colloquial / informal / formal / slang / jargon / nonstandard / familiar / periphrastic / official / vulgar / taboo / obscene
2. Speakers:
to specify special social groups (by age, gender, social status, occupation (for slang / jargon), etc.)
3. Academic subject area:
chemistry / biology / zoology, etc. (for terminology only)
4. Regional/Geography:
American English / Australian / etc.
5. Temporal:
dated (outdated) / archaic / obsolete / neologism / historical / hot word / nonce word
6. Expressiveness:
approving / disapproving / humorous / ironic / offensive / euphemism /
7. Word type:
abbreviations / acronyms / initialism
Although many of these are defined in the Appendix:Glossary for English, not all of them are, because they are quite subjective. The proposal is pan-project; how would these concepts be normalized across the languages involved?
What qualifies as academic subject area? e.g. Pharmaceuticals, particularly trade marks like Viagra (or, more controversially, Aspirin, which is a trade mark in most of the world other than the USA), which are not chemical names or pharmaceutical compounds but simply brands. - Amgine/t·e18:30, 3 February 2018 (UTC)[reply]
I completely agree that particular concepts of some categories (esp. Register, Temporal and Expressiveness) should be normalized across languages; we used existing tags from the English Wiktionary as an example, Russian tags are also quite messy, and I could find no system for Spanish or German. However, fixing language-specific tags is the second step.
The first and the main step is to introduce a uniform set of categories as a fixed template for all languages and to encourage users 1) to specify as many categories as possible for a particular word usage, and 2) to use the same pattern when describing words in different languages.
We already have this for grammatical description: for every word the part of speech is specified, etc. It will be useful to have this for 'semantics'/'usage' information as well.
For example, at the Wikimedia Pre-hackathon in Olot they discussed the idea of integrating lexical Wikidata into some machine translation systems. These usage labels, if uniformly presented in all languages and integrated, could help to choose translation equivalents. —Isbms2710:30, 4 February 2018 (UTC)[reply]
<nod> There certainly would be work to do. But I was asking how you would address normalizing the categories. It is the part which seems most difficult for Wiktionary.
Also, we don't have a part of speech for every word -- POS concepts don't apply very well to Chinese terms, for instance, and Lojban is just plain wacky, and there are aspects of Japanese that haven't been tackled yet, and the whole issue of idioms has gone back and forth a few times, among many other issues. Looking at the numbered list above, some aspects are rather confusing -- #3 says "for terminology only", which is very ambiguous given the context.
While I support the underlying idea (semantic tagging for better correspondences across languages), I caution planners that this is a vastly complicated issue. Please do not expect rapid progress. :) ‑‑ Eiríkr Útlendi │Tala við mig18:41, 6 February 2018 (UTC)[reply]
In my opinion there are certain elements here which are broadly accepted - for example the temporal tags. I believe most language Wiktionaries will have the concept of dated/archaic terms, e.g. Catégorie:Langage désuet, Kategorie:Zastaralé_výrazy, etc. Those should be immediately implemented, as they serve as exemplars of how this project can work inter-language. - Amgine/t·e17:21, 7 February 2018 (UTC)[reply]
Old English noun declension templates are woefully incapable of presenting what should, by all rights, not be too complex. For example, see feond for an entry where each form has to be specified because the templates can't handle it. If someone could Luacise it so that it worked like our Latin declension templates, that would help a great deal. @Rua, JohnC5 as people with a likely interest. —Μετάknowledgediscuss/deeds21:38, 4 February 2018 (UTC)[reply]
@Esszet: There's a link after the period even in the desktop version, though it's invisible: for instance, <span class="interProject"><a href="https://en.wikipedia.org/wiki/elephant" class="extiw" title="w:elephant">Wikipedia</a></span> in elephant. Not sure what its purpose is. The CSS file for the desktop version, MediaWiki:Common.css, hides it, while the CSS for the mobile version, MediaWiki:Mobile.css, doesn't. — Eru·tuon05:24, 6 February 2018 (UTC)[reply]
Done. Let me know if there are any unwanted side effects; I looked for anything else that used that class, and didn't see anything that seemed likely to cause problems. - -sche(discuss)18:29, 6 February 2018 (UTC)[reply]
On jocose, I added a citation, originally with {{cite-book}} but then changed it to {{quote-book}}. In both, the only date I used was the year (1886), but when I use the latter, correct template, it displays "1886. February 5." Where is it getting the date??? —Justin (koavf)❤T☮C☺M☯18:04, 5 February 2018 (UTC)[reply]
The quotation templates use PHP functions to parse dates. This is unfortunate because PHP will never throw an exception, ever, no matter what garbage input you give it. So if you give it a date of "1886" it will attempt to fill in the other parameters of the date with the current date. Anyway, if you just want a year, use the year parameter. Only use date if you have a specific date. DTLHS (talk) 18:13, 5 February 2018 (UTC)[reply]
Latvian (lv) has most of the scholarly tone diacritic removal/replacement rules covered (thanks to whoever did this), but it would be awesome to also get rid of the spelled-out <uo> diphthong (just o in standard orthography), where one of the two letters (which one seems to vary by author) gets a tone diacritic.
To avoid having to make any complicated "logic statement" I could spell out all of them (not that many because it's not possible for 2 different tone marks to be within the same diphthong):
But the part with macrons should only apply to the <uo> sequence, because the macron is a legitimate diacritic (technically it's not even a tone mark, but I guess they use it as a replacement for the tilde; since <uo> is not part of the orthography to begin with, that prevents any confusion, I suppose.) Neitrāls vārds (talk) 03:01, 8 February 2018 (UTC)[reply]
I guess it makes sense that replacing a sequence would be more tricky than a single character...
However, uo is not a variant spelling, only a "dictionary notation convention" (for lack of a better term); to my knowledge there isn't any spelling tradition (that has seen any use) that would spell out uo's, only a convention that is used in the headword lines of more specialized dictionaries. Neitrāls vārds (talk) 18:14, 9 February 2018 (UTC)[reply]
Modern orthography was conceptualized at the turn of the 19th/20th centuries but introduced after WW1 (1918-ish?). Initially the plan was for it to spell out uo's, but that didn't materialize (rightly so if you ask me: when every (native) o is implicitly uo, what's the point of explicitly spelling them out, but I digress.) So, outside of dictionaries it has never really been used. Neitrāls vārds (talk) 18:14, 9 February 2018 (UTC)[reply]
The issue is that we strip diacritics to make linking easier, but not as a tool to normalise an orthography. The problem is not a technical one, but that this is an inappropriate application. —Μετάknowledgediscuss/deeds18:38, 9 February 2018 (UTC)[reply]
So there has never been a precedent? I.e., a language actually adding extra letters for their dictionary notation as opposed to just adding diacritics (every example I can think of falls in the latter category actually.)
not as a tool to normalise an orthography – as I outlined above, uo has never been part of any orthographic tradition, only "dictionary notation" / faux transcription.
Latvian is not that relevant in etymologies (Lithuanian can usually do the same job while being more archaic), but looking forward, how can this problem be tackled? Suppose I magically fix all the links right now; then the year 2020 rolls over and there are another 200 red links when there are perfectly fine entries that they should land on, say, for example, a link for uozuols when there's ozols (and the former is not a valid form attestable in prose, only in dictionary headword lines.) It's not that I care that much, but it sounds like something one would constantly need to look after. Neitrāls vārds (talk) 20:57, 9 February 2018 (UTC)[reply]
I agree this is a problem that should be resolved. I recall this kind of thing coming up before (there was a discussion of it involving msh210 and Rua—CodeCat at the time). - -sche(discuss)05:27, 9 February 2018 (UTC)[reply]
That is what would normally appear if an inflection has the same spelling as the main entry. I don't think it should be done away with. DonnanZ (talk) 16:45, 9 February 2018 (UTC)[reply]
We don't normally create inflection entries if they are the same spelling as the main entry (I don't anyway) so this case is a little odd. DonnanZ (talk) 19:12, 9 February 2018 (UTC)[reply]
@Erutuon, Rua, DTLHS or anyone else who might know: can we find an actual, general solution to this? Simply removing templates as was done on [[acquit]] seems like an undesirable and unmaintainable approach that only "fixes" individual entries as they crop up. - -sche(discuss)19:24, 11 February 2018 (UTC)[reply]
@-sche: I had taken a look at the HTML, and thought it was because two CSS selectors were both emboldening the same word: the strong tag and the .form-of-definition-link .mention class selector. But when I look at it with browser-internal styles displayed (I'm in Firefox), it's clear that the reason is slightly different: the <strong> and <b> tags have the rule font-weight:bolder;, which means that the "form of definition mention" text, which is already bold, is made even bolder by the <strong> tag. One solution would be to override this browser-internal rule with strong,b{font-weight:bold;} in MediaWiki:Common.css, though I'm not sure if that's the best solution. — Eru·tuon20:54, 11 February 2018 (UTC)[reply]
Is there any way to standardize the css classes to only use either strong or bold in order to match what the wikitext produces? Having both floating around seems destined to create oddly-distributed coincidental combinations. Chuck Entz (talk) 21:44, 11 February 2018 (UTC)[reply]
I don't quite understand your question, because strong and bold are HTML tags and have nothing to do with CSS rules applying to classes. I might not have explained things well. In the entry acquit, a selflink <strong class="mw-selflink selflink">acquit</strong> (acquit) was generated from the wikitext [[acquit]]. The strong tag, as well as the b tag generated by wikitext bolding syntax, has the CSS property font-weight:bolder; applied to it by my browser, and apparently by other people's browsers. The selflink was wrapped by <span class="form-of-definition-link"><i class="Latn mention" lang="en">...</i></span>, and MediaWiki:Common.css applies the property font-weight:bold; to this configuration of classes. So acquit starts out bold because of the Wiktionary CSS property and becomes even bolder because of the browser-internal CSS property. (The resulting bold value is 900 according to my browser: font-weight:bold; + font-weight:bolder; = font-weight:900;.) — Eru·tuon22:30, 11 February 2018 (UTC)[reply]
How would such a line in the css interact with e.g. the lines that specify that "bolded" Hebrew has a normal (non-bolded) font weight and is big instead? Would it override them and cause Hebrew to be "bolded"? If not, that sounds like a good fix. What did we do to fix the "fishbone" problem linked above, of self-links on the headword line being double-bolded? - -sche(discuss)01:18, 12 February 2018 (UTC)[reply]
@-sche: Based on this article, CSS selectors that include class names (.Hebr) will have precedence over those that only include tag names (b,strong). So the Hebrew-related styles will behave in the same way.
When I test it in my browser, b,strong{font-weight:bold;} fixes the double-bolded headword selflink problem too, so it could replace the rule that currently fixes the problem, b.selflink,strong.selflink{font-weight:inherit;}. I wonder if there are any cases in which Wiktionary needs any levels of bolding besides normal and bold. — Eru·tuon04:56, 12 February 2018 (UTC)[reply]
@Vahagn Petrosyan: I see that you have gone ahead and renamed it, which is probably not a good idea until we're ready to switch everything over. As DTLHS notes, a big part of it will be the categories, so if you want to start moving those over, now would be a good time. —Μετάknowledgediscuss/deeds21:26, 9 February 2018 (UTC)[reply]
Can anyone please help me in removing transliteration of Urdu, Persian & Arabic languages from Urdu Wiktionary? The imported modules there cause Urdu entries to say "transliteration needed" but that shouldn't be necessary because it's the Urdu Wiktionary. — Bukhari(Talk!)11:55, 12 February 2018 (UTC)[reply]
UPDATE: I added three rows for the noun forms, but not the columns for the names of the grammatical cases, which are "nominatiu, genitiu, datiu, acusatiu" in Occitan. --Lo Ximiendo (talk) 13:38, 13 February 2018 (UTC)[reply]
That API endpoint was added to permit dictionary lookups from within the Android Wikipedia app (documentation). I'm not involved in the development of the API, I only suggested that our templates generate some extra markup to facilitate the parsing. I'm not sure the endpoint is still maintained/developed at the moment. The discussion on phabricator has stalled in any case. – Jberkel23:16, 13 February 2018 (UTC)[reply]
@Jberkel do you think it is worth submitting a bug? If so would you mind doing so? A little familiarity with the project goes a long way in making bugs meaningful. If you would rather not I can take a stab at it. - TheDaveRoss13:23, 14 February 2018 (UTC)[reply]
Is there any way to see a complete list of all languages that have a given language X as their ancestor? For example, both German and Yiddish have Middle High German as their immediate ancestor, while Cimbrian has Bavarian as its immediate ancestor and MHG as a more distant ancestor; is there any convenient way to see a complete list of languages that have MHG anywhere in their ancestor tree? —Mahāgaja (formerly Angr) · talk15:36, 14 February 2018 (UTC)[reply]
No, "derived terms" categories (for all languages) include all forms of derivation, including inheritance and borrowing, whereas "borrowed terms" are only form terms borrowed directly. Consider e.g. French terms derived from Latin, which includes borrowed terms and a large inherited vocabulary. But the distinct also holds for Bashkir; a word borrowed into Bashkir from, say, English, which borrowed it from French, which inherited it from Latin, which borrowed it from Arabic, is thus a Bashkir word which is ultimately derived from Arabic, but it's not a Bashkir word borrowed from Arabic. (The category boilerplate text could be expended to explain this better, IMO.) - -sche(discuss)09:46, 15 February 2018 (UTC)[reply]
I'm struggling to think of any English verb whose present participle fooing isn't also a gerund-style noun at the same time. The Accel gadget is too intricate for me to tinker with, so can someone please add the gerund forms to it? —Justin (koavf)❤T☮C☺M☯17:56, 15 February 2018 (UTC)[reply]
I'm proposing the addition of another field to {{Module:languages}} data which is the Wikidata item for that language. This would supersede the wikipedia_article property, since these links could easily be generated from the Wikidata item id. – Jberkel09:54, 16 February 2018 (UTC)[reply]
I support that idea. It will have to be done with great care, though, as some of our languages may not map as intuitively as you'd expect to Wikidata items, and many won't have items at all. —Μετάknowledgediscuss/deeds17:35, 16 February 2018 (UTC)[reply]
@Victar: What is the problem? It looks fine to me. I see an asterisk and an Avestan word, right-to-left, followed by a space and transliteration in parentheses, left-to-right. — Eru·tuon20:46, 19 February 2018 (UTC)[reply]
@Victar: Wow. So you are seeing several spaces between the reconstructed Avestan and the opening bracket, while I am seeing just one. I am using Firefox Quantum 59, but I just viewed this page in Chrome 64 and saw this spacing problem. It seems to be related to the unicode-bidi:embed; CSS property that is assigned to Avestan in MediaWiki:Common.css. If I remove that property in the developer tools (right-click on the text and click "Inspect"), the text displays with only one space, but the asterisk is then on the left side. In fact, when I switch between the different unicode-bidi property values, the property values that put the asterisk on the right (where it should be) also have the spacing problem. That's got to be a bug. Something about including an asterisk in right-to-left text is screwing things up. — Eru·tuon21:54, 19 February 2018 (UTC)[reply]
@Erutuon: Good to know it's not broken cross-platform. What if we filtered out the asterisk from the Avestan text and then add it back with CSS, something like .Avst::after { content: "*" }? --Victar (talk) 22:17, 19 February 2018 (UTC)[reply]
@Victar: Interesting idea. I tried it in the developer tools (through JavaScript) and it does work, though I had to modify the selector to .Avst a::before to get the asterisk to display inside the link and in the correct position (on the right side). So it amounts to removing the asterisk and then adding it back. Heh. — Eru·tuon03:06, 20 February 2018 (UTC)[reply]
Previous discussion of our bumping into the Lua memory limit, including a rejected phabricator request to raise that memory limit: WT:GP/2017/April § water is broken.
Weird. I wouldn't expect this to consume that much more memory. Or there's something seriously wrong inside the wikidata extension. – Jberkel23:11, 19 February 2018 (UTC)[reply]
It's probably just that there is a delicate balance with the memory that the modules use- many pages are right on the edge and any addition can put them over. DTLHS (talk) 23:18, 19 February 2018 (UTC)[reply]
I removed the sitelink lookup and it still fails. If there's something seriously wrong I'd expect a lot more pages to fail. – Jberkel23:43, 19 February 2018 (UTC)[reply]
Hm, it's just a few extra bytes per language, but given the number of languages it could add up to something like 200kb, assuming that all data modules get loaded. – Jberkel23:50, 19 February 2018 (UTC)[reply]
One solution could be to mirror the language data into another module with only the wikidata IDs mapped to our language codes. DTLHS (talk) 23:52, 19 February 2018 (UTC)[reply]
@Jberkel: The total memory is probably more than 200 KB. I don't entirely understand how Scribunto memory works, but this Lua function is an attempt to get a handle on how much memory the new Wikidata items might take up. The World of Warcraft wiki says that each table index not in the array part of the table takes up 40 bytes, plus the bytes taken up by the value. And apparently each string uses 24 bytes along with its byte length. So 7447 Wikidata items times 40 bytes = 297,880 bytes; the total of the bytes in each of the strings is 56,306 bytes; then 24 bytes times 7447 strings = 178,728 bytes. Total of all of those, 532,914 bytes. And if any tables had to be expanded to the next larger size (a power of two), that added memory too. So assuming this all is correct, more than 500 KB has been added by the recent edits on any page where all the language data modules are transcluded, even when not considering the memory used by mw.loadData when it wraps the data modules, and by the new getWikidataItem function, and so on. — Eru·tuon00:05, 20 February 2018 (UTC)[reply]
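For anyone who wants to check that arithmetic, here is a rough back-of-the-envelope sketch in plain Lua; the 40-byte hash-key and 24-byte string-header figures are the assumptions quoted above, not measured Scribunto values.

-- Rough estimate of the memory added by the Wikidata item strings.
local BYTES_PER_HASH_KEY = 40 -- per table index outside the array part (assumed)
local BYTES_PER_STRING = 24   -- per string, on top of its byte length (assumed)

local function estimateWikidataMemory(itemCount, totalStringBytes)
    local keyBytes = itemCount * BYTES_PER_HASH_KEY
    local stringHeaderBytes = itemCount * BYTES_PER_STRING
    return keyBytes + stringHeaderBytes + totalStringBytes
end

-- 7447 Wikidata IDs whose text totals 56,306 bytes:
print(estimateWikidataMemory(7447, 56306)) --> 532914 bytes, i.e. roughly 0.5 MB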
@Jberkel: Given the scope of the memory errors that are being produced and the very limited usefulness of the Wikidata IDs, I would like you to undo your changes to the modules for now. (We can keep the field for the IDs, just not use them.) —Μετάknowledgediscuss/deeds05:27, 20 February 2018 (UTC)[reply]
So, comment out the Wikidata IDs rather than undoing/entirely removing them? (Seems sensible, whether it's what you're suggesting or not.) We still need to address the pre-existing problems of our entries using Lua for so much, e.g. auto-transliteration and the redlink finder, of course. If memory usage grows with the number of codes, it will continue to go up for that reason too, because we are always adding more codes... - -sche(discuss)06:26, 20 February 2018 (UTC)[reply]
Unfortunately, our project is well suited to automation, and I feel that this issue will continuously be coming up. As Meta has mentioned, it seems like we should ask the devs for a software solution to this problem, whether that be increasing the memory limit or having them streamline some of our base processes (though I don't know how that might work). —*i̯óh₁n̥C[5]06:32, 20 February 2018 (UTC)[reply]
Le sigh. Ok, I'll undo my changes. I was initially thinking about splitting the data modules into smaller pieces (data/a/a1, /a/a2 etc.) but there will always be some outlier pages which transclude everything, and more pieces also means more overhead (and inconvenience for editors). Another solution could be to increase the memory limit exceptionally for a few high traffic pages (but how would that be set up?). In any case we need to find a "proper" solution soon. We also need better tools for profiling memory usage. I'll start a ticket on phabricator to get some ideas. – Jberkel07:55, 20 February 2018 (UTC)[reply]
@Jberkel: To be fair, if every language is going to have canonical name and wikidata code, couldn't you put those in indices [1] and [2] to save a lot of memory in the language modules? —*i̯óh₁n̥C[5]08:23, 20 February 2018 (UTC)[reply]
Indeed, we're approaching full coverage of the family parameter, so it might make sense to put that in [3] and just assign an "uncategorized" family to those yet to be added. I believe I'm right in thinking that the array part is much more memory-efficient than the hash part if it is filled at table declaration time, right? —*i̯óh₁n̥C[5]08:33, 20 February 2018 (UTC)[reply]
@JohnC5: Sorry, I don't follow. I don't see how an extra index would save memory here. As Erutuon has indicated, the storage requirements for strings are around 24+length * number of instances. It's difficult to get below this baseline. The table keys should be handled by Lua's string interning and only count once. I'll verify this though to be sure. – Jberkel09:28, 20 February 2018 (UTC)[reply]
@Jberkel: I'm saying that instead, of putting the values of canonicalName, wikidata_item, family under those names entries (i.e. in the table's underlying hashtable), put them as entries [1], [2], [3] of the table's underlying array. For instance, convert:
["zaa"] = {
canonicalName = "Sierra de Juárez Zapotec",
otherNames = {"Ixtlán Zapotec", "Atepec"},
scripts = {"Latn"},
family = "omq-zap",
wikidata_item = "Q12953989",
}
to:
["zaa"] = {
"Sierra de Juárez Zapotec",
"Q12953989",
"omq-zap",
otherNames = {"Ixtlán Zapotec", "Atepec"},
scripts = {"Latn"},
}
This will mean that the table creation is much more efficient for these mandatory entries as well as the lookups and will save memory in that way. —*i̯óh₁n̥C[5]09:43, 20 February 2018 (UTC)[reply]
Ah, ok I misread [1] as missing wiki references, not indexes :). Yes, this should save 3 * 40 bytes (string keys) - 32 bytes (3 int keys) = 88 bytes per entry? I can't believe it's 2018 and we're discussing byte-level optimisations :) – Jberkel10:08, 20 February 2018 (UTC)[reply]
That is a good idea. Another idea is to share script arrays between languages, particularly for {"Latn"}, which is used more than 3000 times (see the "script combinations" table in User:Erutuon/language stuff). That is, define local Latn = {"Latn"} at the top and use that in each applicable data table on the page. mw.loadData is clever enough to cache only one copy of the table then. That would in theory save 40 + 16 bytes for every "Latn" script table after the first, plus about 24 + 4 bytes for the string (84 bytes?). I tried it in one module, but didn't notice any effect. I suppose it would save even more, at least in the data module, to use a string, {--[[...]]scripts="Latn",--[[...]]}, instead of an array, but the functions relying on the scripts item would have to be modified. — Eru·tuon10:47, 20 February 2018 (UTC)[reply]
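To make the sharing idea concrete, a minimal sketch of what a data module could look like with one shared script table; the second entry and its values are made up purely for illustration, not taken from the real data modules.

local m = {}

local Latn = {"Latn"} -- defined once; mw.loadData caches a single copy of it

m["zaa"] = {
    "Sierra de Juárez Zapotec",
    "Q12953989",
    "omq-zap",
    otherNames = {"Ixtlán Zapotec", "Atepec"},
    scripts = Latn, -- shared table instead of a fresh {"Latn"}
}

m["xxx"] = { -- made-up second entry, just to show the reuse
    "Some Other Language",
    "Q0000000", -- placeholder Wikidata ID
    "omq-zap",
    scripts = Latn, -- points at the same table, so only one copy is stored
}

return m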
That seems mostly sensible. Latin is by far the most used script, especially for the obscurer lects where no script is yet specified. However, I'd like if we could make a list of languages which don't currently have a script set, before we make Latn the default, so we know which languages we need to check the script of. (Or, add an "undetermined" script code to those languages, which can be converted to specific script codes at leisure.) - -sche(discuss)19:31, 20 February 2018 (UTC)[reply]
@-sche: If you look at the "script combinations" table in User:Erutuon/language stuff, languages with no script are in the None row; there are 3718 of them at the moment. If you sort by the "languages" column, you will see they are the largest group, larger than Latn. — Eru·tuon20:23, 20 February 2018 (UTC)[reply]
Good point; I meant that Latin is the most used script in the world [by number of languages using it], but in our modules, there are still a lot of gaps. But I've filled in a bunch of those gaps; Latin is used by more than half of all the lects we have codes for. But [it occurs to me to back up and ask] would it actually save us any memory to treat Latn as the default, or would the same amount of memory still be used just by the check that would be performed to see whether or not a script was set for a particular language? - -sche(discuss)06:20, 21 February 2018 (UTC)[reply]
@-sche: Right, I was mainly just pointing to the page. It would save some memory to leave out {"Latn"} in the tables. I guess at least 96 bytes is used per instance (based on the World of Warcraft wiki explanation, ignoring Scribunto-specific stuff), if a local Latn variable is not being shared between the tables, which would come to a few hundred kilobytes if all the data modules are being transcluded. By contrast, it's cheap to check for the presence of the "scripts" item in a language's data table: you just check whether data_table.scripts is nil. I wonder if there are languages that need to have their script specified as None? I guess I can't see why. — Eru·tuon07:40, 21 February 2018 (UTC)[reply]
Thanks! Right now, while I'm just adding scripts to the modules, the list doesn't offer much advantage over just noticing which languages have no script set (unless it's of help to someone fulfilling the idea I suggested a few threads down for adding missing scripts). My point is that it would be necessary (or at least helpful) to save or subst: a copy prior to any switch to not declaring Latn at all and assuming that languages with no script specified can be assumed to be written in Latn (a fine assumption, but one we'll want to fix the edge cases of). (I've saved a copy now.) - -sche(discuss)15:04, 22 February 2018 (UTC)[reply]
From the perspective of the module, I guess there's probably no advantage to specifying "None" over assuming "Latn". But from the perspective of people trying to go through and ensure that languages with identifiable scripts have those scripts specified (most are Latin, but in a few cases the script has been Deva, or Ethi, or Thai), if we switch to specifying no script when the script is Latn, it would be good to know which languages have no script specified because the script is known to be Latn, vs which have no script specified because the script is not known. Perhaps this could be accomplished by first adding a commented-out scripts = {"None"} or script unknown to languages with no script specified, so the module doesn't have to spend any time processing that "script", but humans can still see while editing the module which languages we still need to track down script info for. - -sche(discuss)16:06, 21 February 2018 (UTC)[reply]
That's a good general principle, especially for a wiki that requires elapsed-time-consuming research. We need more allowance for work in process at a highly granular level. I don't really need to get a red Lua message for typing "g=f?, m". I need to have an acceptable entry to which I can come back when I have more information or am working on that class of problem. DCDuring (talk) 17:02, 21 February 2018 (UTC)[reply]
@Erutuon: Well, mine will require a script change as well. The transition for mine would also be fairly easy: change the accessors to check the positional params as well as the hashtable during the transition period, then remove the check in the hashtable after the transition is over. Could you possibly get some statistics on your page concerning how many languages don't have family params? Thanks! —*i̯óh₁n̥C[5]11:01, 20 February 2018 (UTC)[reply]
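A minimal sketch of what such a transitional accessor could look like; the method names and field positions here are illustrative, not the actual Module:languages code.

-- Prefer the new positional slot, fall back to the old named key until
-- every data module has been converted; afterwards the fallback can go.
local Language = {}
Language.__index = Language

function Language:getCanonicalName()
    return self._rawData[1] or self._rawData.canonicalName
end

function Language:getWikidataItem()
    return self._rawData[2] or self._rawData.wikidata_item
end

function Language:getFamilyCode()
    return self._rawData[3] or self._rawData.family
end

return Language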
@JohnC5: I've added a table of the total number of languages and the number that have each data item (with notes on what the numerical indices represent). — Eru·tuon20:41, 20 February 2018 (UTC)[reply]
I am wondering if, at some point in the near future, we can all agree that the concept and execution of the languages module is just not going to work and try to come up with some novel solutions. The current process of making a change, breaking a bunch of things, then trying to scale back changes until nothing quite breaks is not what I would call an optimal design paradigm. If we want to persist in using the current solution, I then propose that we mandate that any changes made will demonstrably *not* break content which is currently unbroken. - TheDaveRoss18:26, 20 February 2018 (UTC)[reply]
You're looking at novel solutions right above your comment. There's no optimal design paradigm possible when we don't have control over how it all works, and we don't even fully understand how memory is allocated. (And that's also why your mandate would not be feasible, because it's hard to demonstrate without trying it first.) —Μετάknowledgediscuss/deeds18:36, 20 February 2018 (UTC)[reply]
Tweaks to the existing design do not qualify in my book, moving from a large flat-file format which needs to be read in its entirety during every invocation to almost anything else would be a marked improvement. Perhaps restructuring the module so that it can read a small page specific to the language code rather than reading a large module with all language data. Perhaps figuring out how to migrate to Wikidata and leveraging an actual structured database. Perhaps something else entirely. - TheDaveRoss21:20, 20 February 2018 (UTC)[reply]
@TheDaveRoss: Yes, we could (and should) make better use of Wikidata. That's why I wanted to incorporate ids in our database. Things like language script data already exist in Wikidata. So in the long term our (reusable) data should be stored there, not in big Lua chunks. – Jberkel02:07, 21 February 2018 (UTC)[reply]
@Vriullop: that's the case for this particular function call, since it loads all the data. However it's also possible to only query the fields needed which is much cheaper. – Jberkel10:51, 21 February 2018 (UTC)[reply]
Actually, perhaps that would be a great first step. If every language had its own data module it would reduce the amount read tremendously. Why does the data need to be in such large chunks? It would be easier to maintain if it were in discrete pages as well. A bot could probably generate all of the submodules in minutes, without a disruption in the existing structure. Then we would only have to update the data module lookup function and the rest should remain functional as is. - TheDaveRoss21:46, 20 February 2018 (UTC)[reply]
I suspect that having to load thousands of individual modules would not be a performance improvement over having to load a single module (or 26 modules as we do now). DTLHS (talk) 22:04, 20 February 2018 (UTC)[reply]
@TheDaveRoss: I'm not sure what you mean by "read in its entirety"; the first time mw.loadData is called on a data module, it creates a cached copy that is then used by later calls to mw.loadData. So a given data module is read only once on a page, provided it is always loaded with mw.loadData and not with require. I am curious what the memory difference would be if the data modules were split up.
There is a certain amount of overhead for each data module loaded with mw.loadData. If I'm reading the source code right, the data-wrapping function creates one table (seen) every time mw.loadData is called to map between the actual (cached) tables and the empty tables that are returned, and for each table in the data module it creates 2 tables (an empty table and the empty table's metatable) and 6 functions. Four of these functions, __index, __newindex, __pairs, __ipairs, are placed in the metatable of the virtual table and two (pairsfunc, ipairsfunc) are returned when pairs and ipairs are called on the empty table returned by mw.loadData. (Whew, it actually re-wraps the data every time the function is called, so these tables and functions are duplicated for every invocation! That's got to be a major contributor to our memory problems, because we load data modules so many times.)
Okay, so I guess the only item that would be duplicated if the data modules are split is the seen table. [Edit:] Actually, only the top level of a data module is wrapped. Subtables are wrapped only if they are visited by indexing. (For instance, mw.loadData("Module:languages/data2")["en"]["scripts"] wraps the top-level table, the English data table, and the English scripts table.) So if you iterate through a loaded data module that contains subtables, each of the subtables will be wrapped, and memory usage will be greater than if you load it without doing anything else. — Eru·tuon22:29, 20 February 2018 (UTC)[reply]
@Erutuon: Re performance, the reality is that we are up against an artificial performance problem, Wikimedia decided that 50mb of Lua memory usage would be the limit whether or not some other amount would be usable without compromising actual performance (e.g. page load time, server cost). The solution, until we start hitting other performance issues, can be as simple as minimizing the use of Lua memory in favor of resources which are less restricted (processor time). Splitting the data module into a per-code format would, I completely agree, increase the overhead in terms of function calls, but since most pages contain very few languages, I suspect that on average it would reduce overall server resource consumption. Since it is very hard for us to profile the things we do on wiki, we will be mostly stuck guessing about these types of things. (edit) Also, since not every invocation returns the same table in the current format, I am curious how MW decides to optimize. - TheDaveRoss13:13, 22 February 2018 (UTC)[reply]
Re "most pages contain very few languages": English lemmas with translations tables contain lots of languages, and the number of those is only going to increase as we become more and more complete. They are already the entries we're having trouble with. - -sche(discuss)20:25, 22 February 2018 (UTC)[reply]
@-sche: True. However currently every page with any invocations needs to read a large data file into memory, even if it only needs one language. There will be a tipping point somewhere when the average page needs to read a sufficiently large portion of the current module, but we are VERY far from that. - TheDaveRoss21:03, 22 February 2018 (UTC)[reply]
@TheDaveRoss: Actually, I've changed my mind; splitting up the language data modules is worth a try. It makes sense, because a given module typically uses only one or two language data tables. However, as there are 8031 language codes and there would be that many modules, it would probably be best to keep the current large modules for human editing and create a bot that would maintain the small modules. They would need to be protected and Module:documentation could display a message like "This module is generated from module x by a bot. Please edit module x instead of this one." (Heh, this would make the list of transclusions incredibly long. I wonder how many language codes are used on the pages with the most translations.) — Eru·tuon21:34, 22 February 2018 (UTC)[reply]
But would this actually help (m)any entries? We aren't having problems on entries that use only one language code, e.g. Evenki entries that never need to invoke any other language code besides Evenki, so we don't need to "fix" all those pages. We might see improvements on the few pages with very many languages that are breaking now, but we'd be letting that tail wag the dog, in a way that would require much more upkeep (8000+ separate modules, possibly maintaining a bot to handle them,...). Our most complete pages, that transclude thousands of language codes, might still break. - -sche(discuss)22:30, 22 February 2018 (UTC)[reply]
@DTLHS: The first step is determining if it's worth it. If so, I might consider learning bot-writing just for this purpose.
@-sche: I don't know. Maybe loading one of several large modules many times is more costly than loading many small modules with the same data, or maybe not. There is probably a way to test this without creating 8000-plus modules. — Eru·tuon23:13, 22 February 2018 (UTC)[reply]
I was thinking of replacing the languages/dataX modules with something like languages/data/en and keeping the languages module exactly as it is. Once the module has been split into languages (perhaps by bot) it seems like it would be easier for humans to maintain the smaller, specific data files. They are easy to find (since they are just at their ISO code subpage) and they will be very small and simple. - TheDaveRoss13:47, 23 February 2018 (UTC)[reply]
The current system has the advantage that it's easier to quickly add data to a lot of languages, e.g. paging between Wikipedia, Ethnologue and one large lettered data module at a time, I've added script data to almost a thousand languages. It's also easier to watchlist and monitor changes to a few data modules. If we split it up, it'd seem like a step backwards, to when we had templates. We would seem to need to protect not only all existing subpages (/en, /fvr, /aav-ban-pro, etc), but all nonexistent subpages of valid form (/xx, /xxx, /xxx-xxx, /xxx-xxx-xxx) against being created by vandals, since pages created that way would AFAICT be accepted by the modules without complaint. And it doesn't seem like it would help that many pages. I'm not totally opposed to it, it just seems like it has a lot of drawbacks and not such great benefits. - -sche(discuss)15:07, 23 February 2018 (UTC)[reply]
If we didn't care that the language data modules were human readable, how much could we reduce the size? I'm thinking of something like a minifier that periodically "compiles" the human readable modules (what we have now) into something smaller. DTLHS (talk) 18:41, 20 February 2018 (UTC)[reply]
@DTLHS: One idea: concatenate all data into a string and provide another string with numerical data (printed in some non-decimal system) to indicate how to read the data. But I don't know exactly how to implement that or if it would really use less memory. — Eru·tuon21:12, 20 February 2018 (UTC)[reply]
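A toy illustration of that idea, with the offsets kept in a plain table for readability rather than encoded in a second string as suggested above; the names and codes are just examples.

-- All canonical names packed into one string, plus byte offsets to read them.
local packed = "EnglishGermanFrench"
local offsets = {
    en = {1, 7},
    de = {8, 13},
    fr = {14, 19},
}

local function canonicalName(code)
    local o = offsets[code]
    return o and packed:sub(o[1], o[2]) or nil
end

print(canonicalName("de")) --> German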
Language modules seem to be used more intensively in the translation tables, but translation templates only need to know the script (and transliteration?), and probably other templates only need the script as well. Smaller modules with script data could be a good approach. --Vriullop (talk) 10:38, 21 February 2018 (UTC)[reply]
@Vriullop: Ideally we would still store all the data in one place and have a mechanism to selectively load only the fields needed, sort of like a specialized view of the data. – Jberkel10:44, 21 February 2018 (UTC)[reply]
What is the intended format for cases where a lect does not, at the time it is added to the data module, have a Wikidata ID? (This could easily be the case for some of the more obscure lects we add exceptional codes for.) A blank "",? Use the old format where the canonical name and family are named parameters/fields? - -sche(discuss)19:28, 20 February 2018 (UTC)[reply]
Just a random idea: I noticed that Lua has weak tables which could be used to hold the language data. If more memory is needed some of it can be garbage collected (and later reloaded if necessary). The problem at the moment is that all language modules are loaded and never reclaimed. – Jberkel16:27, 21 February 2018 (UTC)[reply]
@Jberkel: Unfortunately, data modules that will be loaded with mw.loadData can't be weak, because you can't add metatables to them, and I don't know if the weakness of tables actually even affects Scribunto memory usage. — Eru·tuon20:20, 22 February 2018 (UTC)[reply]
@JohnC5: It might reduce memory to put scripts and other_names in indices 4 and 5. Those are the next most frequent items, in that order. However, going from 4 to 5 array items may enlarge the size of the array part of the table from 4 to 8; if so, leaving other_names in the hash part would be best. — Eru·tuon22:01, 22 February 2018 (UTC)[reply]
@Erutuon, Jberkel: So last night, while doing some other work, I found what I think is a more efficient and user-friendly way of doing this. I've created Module:languages/global which contains the names of all the fields in the language data ordered by frequency, all the standard diacritics, and the common scripts. We load this into all the language modules and use it as the one source of truth. So what is now:
It turns out that, for this case, fields 1–6 will go into the array whereas 8–10 will go into the hashtable because [7] is omitted. However, we never iterate over these tables, so the simplest tables will only have a few bytes' worth of storage overhead. Then, when you want to get something out, you do something like:
local g = mw.loadData("Module:languages/global")
…
local language_name = self.__data[g.canonical_name]
There will be a bit more lookup overhead, but it will always be O(1). This system also means that if one field becomes more common than another, all we need to do is change the order in Module:languages/global to rebalance the entire project. What do you think? —*i̯óh₁n̥C[5]22:52, 22 February 2018 (UTC)[reply]
@JohnC5: Lua Performance Tips mentions "If you write something like {[1] = true, [2] = true, [3] = true}, however, Lua is not smart enough to detect that the given expressions (literal numbers, in this case) describe array indices, so it creates a table with four slots in its hash part, wasting memory and CPU time." I'll have a look at the implementation, it's still not clear to me how it decides between array/hash parts. – Jberkel08:21, 23 February 2018 (UTC)[reply]
@Jberkel: I'm not sure why it says that since it's definitely not true. If you look at Module:User:JohnC5/Sandbox3, you can see that the first 3 elements which are inserted in the table under indices 1, 2, and 3 get printed out by ipairs, which only prints from the array. The object at index 5 gets put in the hashtable because it is non-consecutive. Note also that the order in which the indices are entered is not relevant, as the compiler will still recognize that 2, 1, 3 is actually 1 to 3 consecutively. Perhaps those tips come from before Lua 5.1, when they souped up the constructor for the tables? Does this make sense? —*i̯óh₁n̥C[5]08:40, 23 February 2018 (UTC)[reply]
@Jberkel: Looking more carefully now that I've made some changes to my test module, the behavior is weirdly more robust than I expected. All the tests I know of for checking the size of the array (#a, ipairs(a), and table.getn(a)) point to my being correct, but I'm startled by these results. —*i̯óh₁n̥C[5]08:58, 23 February 2018 (UTC)[reply]
@Jberkel: I take it back. After some fiddling around with memory stuff, these functions are just clever, but they are not being put in the array. Lemme think on this for a bit. —*i̯óh₁n̥C[5]09:18, 23 February 2018 (UTC)[reply]
@JohnC5: It seems that the length operator first looks at the array part, then looks in the hash part. In the latter case, it finds the largest power of 2 i such that t[i] isn't nil, then does the search for an i less than that where t[i + 1] is nil and t[i] isn't. (So it returns the wrong result if a power of two is empty: x = {[1]=true, [3]=true, [4]=true, [5]=true}; assert(#x == 1).) table.getn does some other stuff that I don't understand, but if that fails, it calls the # operator. 21:48, 23 February 2018 (UTC)
@JohnC5, Jberkel: A way to use numerical indices would be to preprocess the data before outputting it: replacing string keys with numbers. "scripts" could be replaced with 4, "otherNames" with 5, and so on. Because the modules are loaded into memory once on a page, this processing would also be done only once. Unfortunately, it would confuse people that the exported table didn't match the table in the module (as would the previous idea). — Eru·tuon21:55, 14 March 2018 (UTC)[reply]
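A minimal sketch of that preprocessing step; the key numbering is illustrative, and the sample input is made up.

-- Rewrite the human-readable string keys to small integers before the data
-- is exported, so the cached tables carry cheaper keys.
local keyIndex = {
    canonicalName = 1,
    wikidata_item = 2,
    family = 3,
    scripts = 4,
    otherNames = 5,
}

local function compress(data)
    local out = {}
    for code, entry in pairs(data) do
        local packedEntry = {}
        for key, value in pairs(entry) do
            packedEntry[keyIndex[key] or key] = value
        end
        out[code] = packedEntry
    end
    return out
end

-- compress({ en = { canonicalName = "English", scripts = {"Latn"} } }).en[1] == "English"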
@Erutuon: Yes, I think this will be a maintenance nightmare. My call for profiling help on phabricator didn't go anywhere unfortunately. And setting up an instance to do profiling locally seems to be a lot of work. Ideally there would be a sandbox instance with extra debugging and profiling enabled. – Jberkel23:45, 14 March 2018 (UTC)[reply]
After using "Edit source" in translation section (trans-top template) and returning back from the editor by "Publish changes", all translation sections miss the [show ⏷] button on the right side to unfold them (and also the ± sign to edit the header).
It is necessary to refresh the page afterwards to get back to normal operation of the template.
With thanks and regards, Peter10:17, 20 February 2018 (UTC)[reply]
@Peter K. Livingston: I've experienced that as well, when using the AjaxEdit script. The "show" buttons and "±" sign are powered by JavaScript scripts, and I guess the scripts don't reload when "Publish changes" is pressed. Unfortunately I don't know how to fix this. — Eru·tuon21:13, 20 February 2018 (UTC)[reply]
I have had this problem for a couple of days where red links don't turn blue straight away when an entry is done, say for an inflection. I'm not sure whether it's just happening to me, or whether anyone else has noticed it. It can be rectified by doing a null edit, but this shouldn't be necessary. DonnanZ (talk) 19:25, 20 February 2018 (UTC)[reply]
@Wyang, Justinrleung, Suzukaze-c I just noticed page 鳺 / 𱉎. This Chinese character is obviously not a lemma (at least in Chinese). But this page is currently categorized into Translingual lemmas, Middle Chinese lemmas, Old Chinese lemmas, Chinese lemmas, and Mandarin lemmas. What should we do? Dokurrat (talk) 20:15, 20 February 2018 (UTC)[reply]
The lemma – non-lemma distinction is useless for Chinese, since there is no non-lemma form in Chinese by default. I think we should leave it as it is, since the "lemma" categories effectively function as a catch-all place for the words that one would find in a traditional dictionary, which is what 鳺 would belong to. Wyang (talk) 23:06, 20 February 2018 (UTC)[reply]
Is there any way I can pull out all the Arabic word entries in Wiktionary that contain etymological info, please? — This unsigned comment was added by Rdurkan (talk • contribs).
On MediaWiki_talk:Recentchangestext, there is a request to add a link to the Urdu version, but (a) the link is not of the same format as all the rest of the links (which use "foo:Special:Recentchanges" and rely on the site software to redirect to the local name of the page), and (b) I would imagine every wiki has a Recentchanges page, right? so I wonder if there are some criteria for deciding which languages to add interwiki links to, and whether Urdu meets those criteria. - -sche(discuss)05:37, 21 February 2018 (UTC)[reply]
Huh? I see no difference in the format of the link. As for your (b), I don't think we have any criteria, but it would be sensible to choose a cutoff of article count, and limit it to those wikis. —Μετάknowledgediscuss/deeds05:42, 21 February 2018 (UTC)[reply]
The request is to add [[ur:خاص:حالیہ تبدیلیاں]] (the Urdu-language name of the page), whereas the link to e.g. Arabic is not to [[خاص:أحدث_التغييرات]] but rather to [[ar:special:recentchanges|ar]] which then resolves to [[خاص:أحدث_التغييرات]]. - -sche(discuss)05:51, 21 February 2018 (UTC)[reply]
If we make the cutoff 10,000+ articles (since we already link to Arabic and Simple, and since that is the cutoff for the Main Page's sidebar links), we need to add quite a few more. I'll do that now, I suppose. I wonder if this is the kind of thing Wikidata wants to handle, the way they handle interwikis between different wikis' editions of Category:English nouns etc. - -sche(discuss)15:11, 22 February 2018 (UTC)[reply]
If anyone feels up to the task, it would be helpful if someone found every language which has no script specified in Module:languages, but which has entries (or even: which has translations in water), identify which scripts those entries/translations are in, and mass-add the scripts to Module:languages. - -sche(discuss)23:34, 21 February 2018 (UTC)[reply]
It would probably even be useful to simply add to the modules the scripts that all the languages we have entries for are de facto written in (meaning, the scripts our entries are in), not just ones that don't already have scripts specified. - -sche(discuss)15:07, 22 February 2018 (UTC)[reply]
Can someone add a "qN=" functionality to this template? Homophones are so often rooted in regional pronunciations, and I've seen some pretty bad workarounds and incomplete accent tagging due to the absence of this function. Or it could be "aN=" in keeping with {{a}}, or it could be like {{alter}}, but personally I find that template confusing. Ultimateria (talk) 12:04, 22 February 2018 (UTC)[reply]
As titled. There are some entries that have usage examples with audio, for example Korean 헤아리다 (hearida). The audio can be displayed after the example in inline examples, and on a line under the example in multiline ones. Thanks!
I see. This isn't something I can help with, unfortunately. In any case, what browser are you using, and what version? I have no problem with Mozilla Firefox Quantum 58.0.2. — SGconlaw (talk) 22:46, 25 February 2018 (UTC)[reply]
@Wyang: It's displaying more or less the same way for me. Changing the inline CSS properties in the table tag that surrounds the audio player fixes it: vertical-align:bottom;display:inline;. The player is then centered on the bullet. (I used the developer tools to tinker with it. I'm in Firefox Quantum 59.) — Eru·tuon22:53, 25 February 2018 (UTC)[reply]
@Erutuon Thank you, that also makes it display better on mine. I'm using Chrome 64.0.3282.167. Although not completely level, the line above is visible at least: [2]. Wyang (talk) 23:04, 25 February 2018 (UTC)[reply]
This probably needs someone who can work with the javascript to solve the positioning issues and maybe make a slimmer player, before it can be added to {{ux}}. DTLHS (talk) 18:07, 25 February 2018 (UTC)[reply]
Russian translit - болого - g, not v - an exception in the exception
Can someone please add a new exception to Module:ru-translit? Please look for the line starting with -- handle Того, То́го (but not того or Того́, which have /v/)
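Not having looked inside Module:ru-translit, here is only a guess at the shape of the fix, assuming the /v/ handling works by respelling -ого before letter-by-letter transliteration and keeping a whitelist of words such as Того; the names and structure below are hypothetical, not the module's real code.

-- Hedged sketch, not the actual Module:ru-translit structure.
local keep_g = {
    ["Того"] = true,   -- the country: /g/ (того and Того́ still get /v/)
    ["болого"] = true, -- requested addition: /g/, not /v/
}

local function respell_ogo(word)
    if keep_g[word] then
        return word -- leave г alone
    end
    -- default rule: genitive-style -ого is pronounced /v/, so respell it as -ово
    return (word:gsub("ого$", "ово"))
end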
In this phabricator task an admin requested deletion protection that was backed up by community consensus. The patch is currently on hold since, if it were merged, move protection would be enabled for the main page as well. This deletion and move protection, if implemented, would block all users (including sysops) from moving or deleting the page. What are the community's thoughts on it? If the community is not ready to fully commit to this protection yet, maybe enable it for a reasonable trial period (6 months or so) to see its effects?
We have done so before, but I think there are other ways to accomplish the same results without moving if the need arises in the future. - TheDaveRoss13:59, 26 February 2018 (UTC)[reply]
If there is appropriate consensus I can make a link to the discussion (or an admin/locally trusted user can post it) on the phab task. As soon as this is given the aim is to merge the change ASAP --Sau226 (talk) 17:11, 1 March 2018 (UTC)[reply]
That's a workaround, not a solution :) We really need to fix this properly. I've opened a ticket (T188492) to get some suggestions for better memory profiling. – Jberkel10:59, 28 February 2018 (UTC)[reply]
Once the team has feedback on design issues, bugs, and other things that might need to be worked out, the problems will be addressed and global preferences will be sent to the wikis.
@DTLHS, Suzukaze-c: Yes, Module:headword assumes that all forms share the same script as the headword. So in this case, the Cyrillic was being tagged as Mongolian. This probably saves some Lua resources because findBestScript doesn't have to be called on each form, but I don't know how much. Headword modules for languages that regularly use multiple scripts (Module:mn-headword, Module:sh-headword) supply the script for the alternative form. So in this case the solution would be {{head|bua|noun|tr=xüdöö ažaxy|Cyrillic|хүдөө ажахы|f1sc=Cyrl}}: automatic script detection for the headword, manually supplied script for the alternative spelling. — Eru·tuon01:14, 28 February 2018 (UTC)[reply]
Creation of a simple user page was blocked citing "various specific spammer habits." Suspect an over-zealous reaction to a single link to a page about my late wife. Also said "if I believe it constructive," I could resubmit. That message is wrong, because resubmitting only made the complaint stronger and removed the resubmit offer.
Having read the entire user page guidelines, I am persuaded my three paragraphs contain nothing prohibited and everything asked for. — This unsigned comment was added by 伟思礼 (talk • contribs).
It's an automated preventive measure against those weird people who believe Wiktionary user pages are an appropriate place to post ads. :/
If all links are prohibited, (1) the guidelines should say so, instead of "may describe your real-life activities and/or link to your own website"; and (2) the rejection should not invite a resubmission which will only be rejected again. I removed the link and the rest of it was allowed. 伟思礼 (talk) 06:51, 28 February 2018 (UTC)[reply]
@伟思礼: It is not the case that all links are prohibited, as you will see many pages contain links to a wide variety of places. The restriction on placing links is tied to the status of the account, with brand new accounts being restricted completely. It is certainly inconvenient for new editors, but it is a necessary evil to prevent spam bots from adding links all over the place.
The text of the message is, I agree, unhelpful; that is something we can do something about.
When I make an entry here, go to the bottom and click "publish," it adds a captcha to the bottom of the page, then scrolls to the top and adds "incomplete or missing captcha." Trying to publish an edit in other places adds the captcha to the top of the page. If a captcha is going to always be required, why not make it part of the page right away, instead of making us scroll to the bottom twice and click the same publish button twice? — This comment was unsigned.
I think captchas are only required before submitting edits if the edit contains an external link (either to an unexpected site, and/or prior to the editor making a certain number of edits? I'm not sure). I am also fairly certain that captchas are not something we as an individual wiki control (unlike the so-called "abuse filters" which stopped you from adding a link to your userpage). - -sche(discuss)20:53, 1 March 2018 (UTC)[reply]
I've been using wikis for years and years now, and I'm embarrassed to say that I'm unsure about how to ping people properly. I've gotten messages time and time again like, your ping didn't work, your ping didn't work. I'm not trying to sound like I'm ranting or something, but it's really annoying to have to keep hearing that (and I'm not annoyed at people themselves for telling me, I'm just annoyed that it keeps not working).
I'm not asking to tell me about how pings work (though it'd be nice). I'm just asking if there's some way that pinging can become easier; for instance, if the symbol @ is put before [[User:, then it should automatically ping in every situation. Something like that. Because the current way to do it I THINK is to put the ping right before your signature or on the same line as your signature or something like that.
I'm not into the technical stuff, so I don't know how much work implementing something like that would require, but I'm just asking if there's something we can implement in regards to this. PseudoSkull (talk) 00:05, 1 March 2018 (UTC)[reply]