> ... a translator's and interpreter's work is mostly about ensuring context, navigating ambiguity, and handling cultural sensitivity. This is what Google Translate cannot currently do.
Google Translate can't, but LLMs given enough context can. I've been testing and experimenting with LLMs extensively for translation between Japanese and English for more than two years, and, when properly prompted, they are really good. I say this as someone who worked for twenty years as a freelance translator of Japanese and who still does translation part-time.
Just yesterday, as it happens, I spent the day with Claude Code vibe-coding a multi-LLM system for translating between Japanese and English. You give it a text to be translated, and it asks you questions that it generates on the fly about the purpose of the translation and how you want it translated--literal or free, adapted to the target-language culture or not, with or without footnotes, etc. It then writes a prompt based on your answers, sends the text to models from OpenAI, Anthropic, and Google, creates a combined draft from the three translations, and then sends that draft back to the three models for several rounds of revision, checking, and polishing. I had time to run only a few tests on real texts before going to bed, but the results were really good--better than any model alone when I've tested them, much better than Google Translate, and as good as top-level professional human translation.
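For the curious, the orchestration is roughly this shape -- a minimal sketch only, with illustrative model names and a placeholder `call_model` helper, not my actual code:

    # Sketch of the pipeline shape. call_model() stands in for the real
    # OpenAI/Anthropic/Google API wrappers; model names are illustrative.
    MODELS = ["openai-model", "anthropic-model", "google-model"]

    def call_model(model: str, prompt: str) -> str:
        raise NotImplementedError  # provider-specific API call goes here

    def translate(source_text: str, brief: str, rounds: int = 3) -> str:
        # 1. Three independent drafts, each guided by the user's brief.
        prompt = f"Translate according to this brief:\n{brief}\n\nText:\n{source_text}"
        drafts = [call_model(m, prompt) for m in MODELS]

        # 2. Merge the drafts into a single combined draft.
        combined = call_model(MODELS[0],
            "Combine these three translations into one best draft, "
            "following the brief:\n" + "\n---\n".join(drafts))

        # 3. Several rounds of cross-model revision, checking, and polishing.
        for _ in range(rounds):
            for m in MODELS:
                combined = call_model(m,
                    f"Check this translation against the source and brief; "
                    f"return only the revised text.\nBrief:\n{brief}\n"
                    f"Source:\n{source_text}\nDraft:\n{combined}")
        return combined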
The situation is different with interpreting, especially in person. If that were how I made my living, I wouldn't be too worried yet. But for straight translation work where the translator's personality and individual identity aren't emphasized, it's becoming increasingly hard for humans to compete.
Alex-Programs 7 hours ago [-]
I've ended up doing a lot of research into LLM translation, because my language learning tool (https://nuenki.app) uses it a lot.
I built something kinda similar, and made it open source. It picks the top x models based on my research, translates with them, then has a final judge model critique, compare, and synthesise a combined best translation. You can try it at https://nuenki.app/translator if you're interested, and my data is at https://nuenki.app/blog
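The judge step, stripped down, looks something like this -- a sketch, not the actual Nuenki code; `call` and the prompt wording are stand-ins:

    def judged_translation(text, target_lang, models, judge, call):
        # Get one candidate from each of the top-ranked models.
        candidates = [call(m, f"Translate into {target_lang}:\n{text}")
                      for m in models]
        numbered = "\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
        # A single judge model critiques the candidates and synthesises
        # a combined best translation from them.
        return call(judge,
            "Here are candidate translations of the same source text:\n"
            f"{numbered}\n\nCritique each, then output one combined "
            "translation taking the best phrasing from each.")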
tanvach 2 hours ago [-]
Very neat, love how there’s a formality level selection! Google translate has such bad tendencies to use very formal language (at least when translating into Thai) that it’s almost useless in real life. Some English to Thai examples I tried so far have been quite natural.
tkgally 5 hours ago [-]
Very nice! Thanks for the links.
felipeerias 15 hours ago [-]
It is hard to convey just how important it is to be able to provide additional context, ask follow-up questions, and reason about the text.
I live in Japan. Almost every day I find myself asking things like “what does X mean in this specific setting?” or “how do I tell Y to that specific person via this specific medium?”.
Much of this can be further automated via custom instructions, so that e.g. the LLM knows that text in a particular language should be automatically translated and explained.
tkgally 15 hours ago [-]
> It is hard to convey just how important it is to be able to ... ask follow-up questions, and reason about the text.
Great ideas. I'll think about adding those features to the system in my next vibe-coding session.
What I automated in the MVP I vibe-coded yesterday could all be done alone by a human user with access to the LLMs, of course. The point of such an app would be to guide people who are not familiar with the issues and intricacies of translation so that they can get better translations for their purposes.
I have no intention to try to commercialize my app, as there would be no moat. Anyone who wanted to could feed this thread to Claude, ask it to write a starting prompt for Claude Code, and produce a similar system in probably less time than it took me.
aidenn0 11 hours ago [-]
I'm bad with names, so for any Japanese literature, I need to take notes; it's not unusual to see one character referred to by 3 names. Then you might have 3 characters that are all referred to as Tanaka-san at different points in time.
djaychela 46 minutes ago [-]
>creates a combined draft from the three translations
How is this part done? How are they chosen/combined to give the best results? Any info would be appreciated as I've seen this sort of thing mentioned before, but details have been scant!
boredhedgehog 6 hours ago [-]
What's your approach for dealing with a text too long for an ordinary context window? If I split it into chunks, each one needs some kind of summary of the previous ones for context, and I'm always unsure how detailed they should be.
tkgally 5 hours ago [-]
I haven’t developed an approach to it yet. In my tests yesterday, I did run into errors when the texts were too long for the context windows, but I haven’t tried to solve it yet.
As a human translator, if I were starting to translate a text in the middle and I wanted my translation to flow naturally with what had been translated before, I would want both a summary of the previous content and notes about how to handle specific names and terms and maybe about the writing style as well. When I start working on the project again tomorrow, I’ll see if Claude Code can come up with a solution along those lines.
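The shape I have in mind is something like the sketch below; `llm` is a stand-in for an actual model call, and how detailed the summary and notes should be is exactly the open question:

    def translate_long_text(chunks, llm):
        summary = ""         # rolling summary of everything translated so far
        notes = "None yet."  # decisions about names, terms, and style
        output = []
        for chunk in chunks:
            output.append(llm(
                f"Summary of the text translated so far:\n{summary}\n\n"
                f"Name/term/style decisions so far:\n{notes}\n\n"
                f"Translate the next chunk consistently with the above:\n{chunk}"))
            # Refresh the rolling context after each chunk.
            summary = llm(f"Update this summary with the new chunk:\n{summary}\n\n{chunk}")
            notes = llm(f"Update these notes with any new names or terms:\n{notes}\n\n{output[-1]}")
        return "\n\n".join(output)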
jiehong 9 hours ago [-]
The problem with LLMs for translation is when they refuse to do so if the topic being translated isn’t following their policies, even if the context shows it’s fine here.
It can be as simple as discussing one's own religion.
Alex-Programs 7 hours ago [-]
I made a tool which translates sentences as you browse, for immersion[0]. I solved this by giving the model a code (specifically, "483") to return in any refusal. Then, if I detect that in the output, I fail over to another model+provider.
I also have a few heuristics (e.g. "I can't translate" in many different languages) to detect if it deviates from that.
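In outline it's roughly this -- simplified from the real thing, with `call` as a stand-in for the provider layer:

    REFUSAL_CODE = "483"
    # Fallback heuristics in case the model refuses without the code.
    REFUSAL_PHRASES = ["i can't translate", "i cannot translate"]

    def translate_with_failover(text, providers, call):
        prompt = ("Translate the text below. If you refuse for any reason, "
                  f"reply with exactly {REFUSAL_CODE} and nothing else.\n\n{text}")
        for provider in providers:
            result = call(provider, prompt)
            if REFUSAL_CODE in result:
                continue  # explicit refusal sentinel -> try next provider
            if any(p in result.lower() for p in REFUSAL_PHRASES):
                continue  # heuristic refusal detection
            return result
        raise RuntimeError("all providers refused")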
You can just turn that off, at least on Google's model.
jedberg 2 hours ago [-]
> You give it a text to be translated ... and then sends that draft back to the three models for several rounds of revision, checking, and polishing.
Interesting. Curious if you modeled the cost of that single translation with the multiple LLM calls and how that compares to a human.
tkgally 2 hours ago [-]
I had Claude Code write a module that monitored the incoming and outgoing token counts and displayed the accumulated costs. A Japanese-to-English translation that yielded about a thousand words in English cost around US$0.40.
I didn't double-check the module's arithmetic, but it seems to have been in the ballpark, as my total API costs for OpenAI, Anthropic, and Google yesterday--when I was testing this system repeatedly--came to about eight dollars.
A human translator would charge many, many times more.
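The arithmetic itself is trivial; something like this, with made-up rates (real per-million-token prices vary by model and change often -- these are placeholders, not what I actually paid):

    # (input, output) USD per million tokens -- placeholder numbers only
    RATES = {"model-a": (3.00, 15.00), "model-b": (1.25, 10.00)}

    def call_cost(model: str, tokens_in: int, tokens_out: int) -> float:
        rate_in, rate_out = RATES[model]
        return tokens_in / 1e6 * rate_in + tokens_out / 1e6 * rate_out

    # Accumulate over every logged call in a translation run.
    total = sum(call_cost(m, t_in, t_out)
                for m, t_in, t_out in [("model-a", 12000, 3500),
                                       ("model-b", 12000, 3200)])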
lukax 18 hours ago [-]
Try Soniox for real-time translation (interpreting). With the limited context it has in real-time, it's actually really good.
It's funny -- I independently implemented the same thing (without vibe coding) and found it doesn't actually work. What I ended up with was a game of telephone where errors were often introduced and propagated between the models.
The only thing that actually worked was knowing the target language and sitting down with multiple LLMs, going through the translation one sentence at a time with a translation memory tool wired in.
The LLMs are good, but they make a lot of strange mistakes a human never would. Weird grammatical adherence to English structures, false-friend mistakes that no bilingual person would make, and so on. Bizarrely, many of these would not be caught between LLMs -- sometimes I would get _increasingly_ unnatural outputs instead of more natural ones.
This is not just English to Asian languages; it happens even with English to German or French. I shipped something to a German editor and he rewrote 50% of the lines.
LLMs are good editors and suggestors for alternatives, but I've found that if you can't actually read your target language to some degree, you're lost in the woods.
crazygringo 48 minutes ago [-]
That doesn't match my experience at all. Maybe it's something to do with what your prompts are asking for or the way you're passing translations? Or the size of chunks being translated?
I have been astounded at the sophistication of LLM translation, and haven't encountered a single false-friend example ever. Maybe it depends a lot on which models you're using? Or it thinks you're trying to have a conversation that code-switches mid-sentence, which is a thing LLMs can do if you want?
bboygravity 10 hours ago [-]
You just created the software for a profitable business. People would use that and pay for it.
bugtodiffer 9 hours ago [-]
but it is easy to build a competitor
yoden 3 hours ago [-]
Machine translation is a great example. It's also where I expect AI coding assistants to land. A useful tool, but not some magical thing that is going to completely replace actual professionals. We're at least one more drastic change away from that, and there's no guarantee anyone will find it any time soon. So there's not much sense in worrying about it.
A very similar story has been happening in radiology for the past decade or so. Tech folks think that small scale examples of super accurate AIs mean that radiologists will no longer be needed, but in practice the demand for imaging has grown while people have been scared to join the field. The efficiencies from AI haven't been enough to bridge the gap, resulting in a radiologist _shortage_.
throwaway2037 44 minutes ago [-]
> Machine translation is a great example. It's also where I expect AI coding assistants to land. A useful tool, but not some magical thing that is going to completely replace actual professionals.
I can say from experience that machine translation is light years ahead of where it was 15 years ago. When I started studying Japanese 15 years ago, Google Translate (and any other free translator) was absolutely awful. It was so bad that it struggled to translate basic sentences into reasonable native-level Japanese. Fast forward to today, and it is stunning how good Google Translate is. From time to time, I even learn newspeak (slang) from Google Translate. If I am writing a letter, I regularly use it to "fine-tune" my Japanese. To be clear: My Japanese is far from native-level, but I can put full, complex sentences into Google Translate (I recommend viewing "both directions"), and I get a reasonable, native-sounding translation. I have tested the outputs with multiple native speakers and they agree: "It is imperfect, but excellent; the meaning is clear."
In the last few years, using only my primitive knowledge of Japanese (and Chinese -- which helps a lot with reading/writing), I have been able to fill out complex legal and tax documents using my knowledge of Japanese and the help of Google Translate. When I walk into a gov't office as the only non-Asian person, I still get a double take, but then they review my slightly-less-than-perfect submission, and proceed without issue. (Hat tip to all of the Japanese civil servants who have diligently served me over the years.)
Hot take: Except for contracts and other legal documents, being an "actual professional" (translator) is a dead career at this point.
Alex-Programs 34 minutes ago [-]
Also, Google Translate is really not a particularly good translator. It's the best known, but as far as translators go it's pretty poor.
DeepL is a step up, and modern LLMs are even better. There's some data here[0], if you're curious - DeepL is beaten by 24B models, and dramatically beaten by Sonnet / Opus / https://nuenki.app/translator .
Just a note on the radiologist part: the current SOTA radiology AI is still tiny-parameter CNNs from the mid-to-late 2010s running locally. The NYT ran an article a few weeks ago about this, and the entire article uses the phrase "A.I.", which people assume means ChatGPT, but which can really refer to anything in the last 60 years of A.I. research. Manual digging revealed it was an old architecture.
We don't know yet how a modern transformer trained on radiology would perform, but it almost certainly would be dramatically better.
demosthanos 2 hours ago [-]
> We don't know yet how a modern transformer trained on radiology would perform, but it almost certainly would be dramatically better.
Why? Is there something about radiology that makes the transformer architecture appropriate?
My understanding has been that transformers are great for sequences of tokens, but from what little I know of radiology sequence-of-tokens seems unlikely to be a useful representation of the data.
TechDebtDevin 2 hours ago [-]
That wouldn't be that much different from the current CNN/labeling methods used on medical imaging. Last time I got a CT scan, the paperwork listed the workstation specs and the models/neural network techniques used.
philsnow 18 hours ago [-]
The distinction between what people typically imagine a translator's job is and the reality reminds me of Pixar movies being "localized" instead of just translated (green beans on a plate in the Japan release instead of broccoli, because that's the food that Japanese kids don't like).
Lacking cultural context while reading translated texts is what made studying history finally interesting to me.
Second, the Pixar one is not "mere" translation; it is full localization because they changed the visual to match the "textual" change.
The Pokemon one is where the change was limited to the "text". The translator's heart might have been in the right place (it would depend on how integral to the story it is that the item is onigiri) but didn't have the authority to make the full breadth of changes needed for such adaptation to be successful.
archievillain 6 hours ago [-]
It has little to do with authority and more to do with the effort/return ratio. Visual edits are expensive and dialogue changes are cheap, so it doesn't make sense to redraw frames just for an irrelevant onigiri.
4Kids was very well known for visually changing the Japanese shows they imported if they thought it was worth it, mostly in the context of censorship. For example, all guns and cigarettes were removed from One Piece, turned into toy guns and lollipops instead.
The most infamous example, however, has got to be Yu-Gi-Oh!. Yu-Gi-Oh started as a horror-ish manga about a trickster god forcing people to play assorted games and cursing their souls when they inevitably failed to defeat him. The game-of-the-week format eventually solidified into the characters playing one single game, Duel Monsters (the Yu-Gi-Oh! TCG itself in the real world), and the horror-ish aspects faded away, although they still remained part of the show's aesthetic, based around Egyptian human sacrifices and oddly-card-game-obsessed ancient cults.
When the manga was adapted to the screen, it started directly with a softer tone[1], especially because the show was to be a vehicle for selling cards in the real world, not dissimilarly to Pokemon and MANY other anime from the era.
Nothing that happens in the show is particularly crude or shocking, it had that kind of soft edginess that fit well with its intended target audience (early teens). I imagine watching Bambi had to be much more traumatizing than anything in the show.
But that was still not enough for 4Kids, which had a pretty aggressive policy of no violence or death. Kind of problematic when the show's main shtick was "Comically evil villain puts our heroes in a contraption that will kill them if they don't win." (You can imagine the frequency these traps actually triggered neared zero).
To solve this, 4Kids invented the Shadow Realm. The show, thanks to its occultist theming, already had examples of people being cursed, or their souls being banished or captured. 4Kids solidified these vague elements into the Shadow Realm as a censorship scapegoat. Any reference to death was replaced with the Shadow Realm. Now, one might wonder why the censors thought that "hell-like dimension where your soul wanders aimlessly and/or gets tortured for eternity" was in any way less traumatizing than "you'll die", but I imagine it's because there was always the implication that people could be 'saved' from the Shadow Realm[2] by undoing the curse.
The Shadow Realm was a massive part of the western Yu-Gi-Oh mythos and even today it's a fairly common meme to say that somebody got "sent to the shadow realm", which makes it all funnier that it is not part of the original show.
A couple funny examples off the top of my head:
- Yugi must win a match while his legs are shackled. Two circular saws, one for him and one for the enemy, are present in the arena. They near the two competitors as they lose Life Points, with the loser destined to have their legs cut off.
In the 4Kids adaptation, the saws are visually edited to be glowing blue, and it's stated they're made out of dark energy that will send anybody that touches it to the shadow realm.
- A group of our heroes fight a group of villains atop a skyscraper with a glass roof. In the original version, the villains state that the roof has been boobytrapped so that the losing side will explode, plunging the losers to their death by splattening.
In the 4Kids version, the boobytrap remained, but the visuals were edited to add a dark mist under the glass, with the villains stating that there's a portal under the roof that will send anybody that touches it to the shadow realm. This is made funnier when the villains lose and they're shown to have had parachutes with them all along, and they are NOT edited out.
[1] Technically speaking, there was a previous adaptation that followed the manga more closely and got only one season, generally referred to as Season 0.
[2] It does eventually happen in the anime that the heroes go in an alternate dimension to save somebody's cursed soul. Obviously, this dimension was directly identified as the Shadow Realm in the localization.
falcor84 4 hours ago [-]
Is that beans example a real thing? If so, I would hate for my kids to be subjected to that. The best thing about watching films from another country is that you're exposed to the culture of that foreign place and learn about how it's different from yours - I don't see why we'd try to localize away the human experience as if these differences don't exist.
dlisboa 1 hours ago [-]
Localization is more important than you might think.
My wife worked for a company that helped provide teaching content for schools throughout Brazil. They'd interview teachers all over the country and one of the complaints from teachers in isolated communities was that they had to use the same textbooks as other places in Brazil without any regard to their own situation.
They reported that many examples for teaching kids beginning math featured things like "strawberries" or "apples", things the kids had never seen or maybe even heard of. So now they needed to abstract over what a "fruit" and a "countable object" are, on top of whatever the example was trying to teach. Teachers reported less engagement, and it was more work for them to adapt the material for local relevance.
Try to teach kids about vegetables in the US midwest and use green beans and Bok Choy as an example, for instance. It doesn't make sense.
wil421 58 minutes ago [-]
They don’t have green beans in the Midwest?
We bought kids toys on Amazon and the fruits were strange. Not sure if they were Asian varieties or just made up.
infecto 3 hours ago [-]
I understand the sentiment and agree, but for me there is a line, and I am not sure Inside Out meets the threshold for exposing culture. The changes are too minor, in a children's film, to have much impact. Broccoli to peppers is not a deep enough change. Now, if the film had a Chinese New Year celebration in it and they switched it entirely to a Western New Year celebration, I would think that is a pretty drastic change that does hide cultural differences.
the_af 4 hours ago [-]
Same. I don't understand why people want (them and their kids) to be isolated from other cultures. If you're watching a movie set in Japan, India or China, have it be about their culture. If there's something you or your kids don't understand, make an effort to learn about it (and the green peas thing seems trivial to understand).
Netflix also does something absurd with their subtitles. I was watching "The Empress" (which is set in the Austro-Hungarian Empire) with German language and English subtitles. I like listening to the real actors' voices, and learning the sounds and cadence of the language. So the characters speak in Italian for a while (the subtitles say "[speaking in Italian]"), and when they switch back to German the subtitles clarify... "[in English]". The fuck, Netflix? Surely the viewer of this show understands they didn't speak English in the Austro-Hungarian Empire, so why write that it's English? What the hell is Netflix even trying to achieve here? Seems robotic: "us = English speakers, therefore everyone's default must be English"?
abrugsch 2 hours ago [-]
Just saying "Pixar movies" was probably not a great example. They can be deliberately location-ambiguous (Monsters Inc.; Toy Story, though it's clearly _somewhere_ in America; The Incredibles, a generic "metropolis"/50s futuristic city; Lightyear; Elemental) or very specifically somewhere (Cars, a mashup of Route 66 towns; Finding Nemo, Sydney when on land; Ratatouille, Paris; etc...)
It makes sense to "translate" locale cultural indicators in say Wall·E which was very location agnostic but not so much for say Turning red which is very culturally specific.
the_af 29 minutes ago [-]
Good point.
The localization of The Incredibles in Argentina was embarrassing, though someone must have thought it was a good idea. They used local voice actors popular at the time (though not so much today) with strong porteño (Buenos Aires) accents. They also referred to streets by Argentinian names, e.g. "let's turn at that corner of Corrientes Avenue!". The problem is that Corrientes Av is very typical of Buenos Aires, but nothing on screen looked anywhere close to it, so the whole thing was ridiculous and embarrassing, sort of like if the characters pretended they were in Tokyo and were Japanese.
What if they had gone the extra mile (maybe possible in the near future) and reskinned every character to look more Argentinian, and rethemed the city to look more like Buenos Aires, would I have been happier? Certainly not -- I want to see the original movie as intended, not some sanitized version designed to make me feel "at home" and not challenge me in the slightest.
(I watched the movie in English as well, mind you).
falcor84 3 hours ago [-]
Could that Netflix subtitle thing have been a one-off error? I don't think I've ever encountered such a mismatch before.
It did remind me of watching "The Beast" (La Bête)[0] in the original French with subtitles and I was then surprised when I saw the subtitles say "[In English]" and I was, "Oh, damn, the characters did actually switch to English".
It's consistent through The Empress, I just gave an example. But maybe it's a decision specifically for this show?
For example, when Elisabeth is practicing several languages, each is subtitled "[in $language]", but when she switches back to German the subtitles readily explain... "[in English]". This confused the hell out of me!
ktosobcy 2 hours ago [-]
Erm... yeah, cool and whatnot but:
1) in the case of Pixar, those are just animations that are mostly "generic" (or set in fairyland), hence adapting them to local nuance to get an idea across makes sense
2) as a Pole who was basically "brainwashed" by US-made movies - while being exposed to other cultures is great, being firehose-fed by the "dream factory" was IMHO one of the worst things that happened to "post-commie" countries.
sodality2 18 hours ago [-]
This article is spot on about a lot of things. One thing I think it fails to address is this:
> I feel confident in asserting that people who say this would not have hired a translator or learned Japanese in a world without Google Translate; they’d have either not gone to Japan at all, or gone anyway and been clueless foreigners as tourists are wont to do.
The correlation here would be something like: the people using AI to build apps previously would simply never have created an app, so it’s not affecting software development as a career as much as you first expect.
It would be like saying AI art won’t affect artists, because the people who would put in such little effort probably would never have commissioned anyone. Which may be a little true (at least in that it reduces the impact).
However, I don’t necessarily know if that’s true for software development. The ability to build software enabled huge business opportunities at very low costs. I think the key difference is this: the people who are now putting such low effort into commissioning software maybe did hire software engineers before this, and that might throw off a lot of the numbers.
MarkusQ 18 hours ago [-]
Conversely, it may create jobs. Why? Because the more elephants you have in your parade, the more jobs there are for folks to walk behind them with a broom and bucket. For decades we've seen tools that "let users write their own software" and every one of them has driven up the demand for people to clean it up, make it scale, make it secure, or otherwise clean up the mess.
scuff3d 11 hours ago [-]
CAD, Matlab, and Altium made electrical and mechanical engineers more valuable, not less.
The work got easier, so what we do got more complex.
seventytwo 7 hours ago [-]
They’re all just tools. Use the tools or become obsolete.
catlifeonmars 4 hours ago [-]
Kind of a false dichotomy. A great example is debuggers vs print statements. Some people get by just fine with print statements; others lean heavily on debuggers. Another example is an IDE vs plain Vim.
Becoming obsolete is a fear of people who are not willing or able to learn arbitrary problem domains in a short amount of time. In that case learning to use a particular tool will only get you so far. The real skill is being able to learn quickly (enthusiasm helps).
numpad0 3 hours ago [-]
So, "useless or dangerous tools" is not a self contradictory sentence.
Gas powered pogo sticks, shoe fitting X-ray, Radium flavored chocolates, Apollo LLTV, table saws, Flex Seal for joining two halves of boats together, exorbitantly parallelized x86 CPU, rackable Mac Pro with M1 SoC, image generation AI, etc.
Tools can be useless, or be even dangerous.
camillomiller 8 hours ago [-]
This has been a stable source of business for a while in my niche.
sodality2 17 hours ago [-]
Also true! But that world is one where the vast majority of time is spent cleaning up slop code, so if there's a general shift towards that, I think that still changes the job in a significant way. (I don't have extensive history in the industry yet so I may be wrong here)
MarkusQ 16 hours ago [-]
<tired old fart voice>
It's all cleaning up slop code. Always has been.
</tired old fart voice>
More optimistically, you can think of "user created code" as an attempt at a design document of sorts; they were trying to tell you (and the computer) what they wanted in "your language". And that dialog is the important thing.
ryandrake 15 hours ago [-]
Seriously. Unless you're one of the vanishingly rare few working with true greenfield projects that start with an empty text file, you're basically cleaning up other developers' legacy slop.
ema 7 hours ago [-]
I mean even when I'm working on my own projects I'm cleaning up whatever code I wrote when I didn't yet know as much about the shape of the problem.
danielscrubs 7 hours ago [-]
We still don’t know what good code is. It is all contextual, and we can never decide what that context should be. We are influenced by what is hip today. Right now it is static typing using Rust; tomorrow it might be energy usage with assembly; after that it might be Python for productivity; after that, C# for maintenance.
We can never decide, we just like learning, and there is little real, impactful research into programming as a business.
In two decades we will still collectively say ”we are learning so much”, ignoring that fact.
steveBK123 18 hours ago [-]
Google translate is a good example too in terms of better-than-nothing to the completely uninitiated, helpful to someone with a little knowledge, and obviously not a replacement for a professional. That is - the more you know, the more you see its failures.
I know enough Japanese to talk like a small child, make halting small talk in a taxi, and understand a dining menu / restaurant signage broadly. I also have been enough times to understand context where literal translation to English fails to convey the actual message.. for example in cases where they want to say no to a customer but can't literally say no.
I have found Google Translate to be similarly magical and dumb over 15 years of traveling to Japan, without any huge improvements other than speed. The visual real-time image OCR stuff was an app they purchased (Word Lens) that I had previously used.
So who knows, maybe LLM coding stays in a similar pretty-good-never-perfect state for a decade.
sodality2 17 hours ago [-]
> So who knows, maybe LLM coding stays in a similar pretty-good-never-perfect state for a decade.
I think this is definitely a possibility, but the technology is still WAY too early to say that, even if the "second AI winter" the author references never comes, we wouldn't discover tons of other use cases that would change a lot.
The reasonable concern people have about AI eliminating coder jobs is that it will make existing coders drastically more productive. "Productivity" is literally defined by the number X of people required to do Y amount of stuff.
I'm not sure how seriously people take the threat of non-coding vibe-coders. Maybe they should! The most important and popular programming environment in the world is the spreadsheet. Before spreadsheets, everything that is today a spreadsheet was a program some programmer had to write.
simonw 17 hours ago [-]
I'm still optimistic that the net effect of making existing programmers drastically more productive is that our value goes up, because we can produce more value for other people.
dfxm12 16 hours ago [-]
The economy has taught us that when there is an excess of worker productivity, it leads to layoffs. It certainly does not lead to raises.
ryandrake 15 hours ago [-]
No software company I have ever worked at had an excess of worker productivity. There were always at least 3-5X as much work needing to be done, bugs needing to be fixed, features that needed to be implemented than engineers to do it. Backlogs just grew and grew until you just gave up and mass-closed issues because they were 10 years old.
If AI coding improves productivity, it might move us closer to having 2X as much work as we can possibly do instead of 3X.
SoftTalker 12 hours ago [-]
I don't think you can judge "work needing to be done" by looking at backlog. Tickets are easy to enter. If they were really important, they'd get done or people would be hired to do them (employed or contracted). 10 year old issues that never got attention were just never that important to begin with.
lmm 15 hours ago [-]
That sounds like the famous lump of labour fallacy. When something's cheaper people often spend more on it (Jevons paradox).
bgwalter 14 hours ago [-]
This "fallacy" is from 1891 and assumes jobs that require virtually no retraining. A farm worker could ion theory clean the factory floor or do one small step in an assembly line within a week.
Nowadays we already have bullshit jobs that keep academics employed. Retraining takes several years.
With "AI" the danger is theoretically limited because it creates more bureaucracy and reduces productivity. The problem is that it is used as an excuse for layoffs.
kasey_junk 16 hours ago [-]
Do you have a citation for that?
ohthatsnotright 15 hours ago [-]
What a strange thing to ask for a citation on, when CEO pay, stock buybacks, and corporate dividends are at all-time highs while worker pay and, honestly, just affording to live continue to crater.
kasey_junk 15 hours ago [-]
Productivity is up and labor wages are up. That’s why I asked. It wasn’t an attempt at a rebuttal it was a request for reading material as it’s a heterodox view.
The normal conversation is that productivity growth has slowed and the divide has increased, not that more productivity creates lower outcomes in real terms.
It's economic jargon for what people are paid per hour for working (which can include non-direct payments such as healthcare and pensions), adjusted for inflation (for economists, "real" just means divided by CPI, as opposed to "nominal" which are the actual dollar amounts in the past).
I mean, I hate a lazy "citation needed" FUD attack as much as (really, likely way more than) anyone, but with a bit more context I do think a citation is needed, as the correct citation in the other direction is (as someone else noted) Jevons paradox: when you make it easier to do X, people can use X in ever more contexts, and things which previously needed something way harder than X suddenly become possible. The result -- as much in software development as in any other field: it seems like every year it becomes MUCH easier to do things, thanks to better tools -- always seems to be MORE demand, not less.
We even see a slow raising of the "table stakes" for software, such that a website or app is off-putting and lame to a lot of users if it isn't doing the things that require at least some effort: instead of animated transitions and giant images, maybe the next phase is that the website has to be an interactive assistant white-glove AI experience -- or some crazy AR-capable thing -- requiring tons of engineering effort to pull off, but now possible for your average website due to AI coding.
Meanwhile, the other effects you are talking about all started before AI coding even sort of worked well, and so have very little to do with AI: they are more related to monetary policy shifts, temporary pandemic demand spikes, and that R&D tax law change.
EZ-E 13 hours ago [-]
I rather think that LLMs help to write code faster, and also enables folks that would not program to do so in some capacity. In the end, you end up with more code in the world, and you end up needing more programmers to maintain/keep it running at scale.
visarga 3 hours ago [-]
LLMs don't care that you have to maintain the code; they get no benefit or loss from their work and are unaccountable when they fuck up. They have no skin in the game.
They don't know the office politics, or go on coffee breaks with the team - humans still have more context and access. We still need people to manage the goals and risks, to constrain the AI in order to make it useful, and to navigate the physical and social world in their place.
danielscrubs 7 hours ago [-]
But when everyone started to produce SEO slop, the web died. It’s harder than ever to find truly passionate, single-subject blogs from professionals, for example.
The AI slop will make it harder for the small guys without a marketing budget (some lucky few will still make it, though). It will slowly kill the app ecosystem, until all we reluctantly trust is FANG. The app pricing reflects it.
alganet 16 hours ago [-]
> everything that is today a spreadsheet was a program some programmer had to write
That is incorrect, sir.
First, because many problems were designed to fit into spreadsheets (human systems designed around a convenient tool). It is much more likely that many spreadsheets were _paper_ before, not custom-written programs. In a lot of cases, that paperwork was adapted directly to spreadsheets; no one built a custom program in between.
Second, because many problems we have today could be solved by simple spreadsheets, but they often aren't. Instead, people choose to hire developers, for a variety of reasons.
tptacek 16 hours ago [-]
I'm not sure we're really disagreeing about anything here. If you think spreadsheets didn't displace any programmers at all, that's contrary to my intuition, but not necessarily wrong --- especially because of the explosion of extrinsic demand for computer programming.
alganet 44 minutes ago [-]
You say spreadsheet software replaced programmers.
I say spreadsheet software replaced paper.
That's the disagreement. You have intuition, I have sources:
I think you're right, AI art and AI software dev are not analogous. The point of art is to create art. There are a lot of traditions and cultural expectations around this, and many of them depend on the artist involved. The human in the loop is important.
Meanwhile, the point of software development is not to write code. It's to get a working application that accomplishes a task. If this can be done, even at low quality, without hiring as many people, there is no more value to the human. In HN terms, there is no moat.
It's the difference between the transition from painting to photography and the transition from elevator operators to pushbuttons.
bgwalter 16 hours ago [-]
I'm currently getting two types of ads on YouTube: One is about how the official Israeli Gaza humanitarian efforts are totally fine and adequate (launched during the flotilla with Greta Thunberg).
The other is about an "AI" website generator, spamming every video at the start.
I wonder what kind of honest efforts would require that kind of marketing.
15123123 9 hours ago [-]
Yeah, I think the case for AI art is very different. I see major brands, even those that have been very generous with artist commissions, like McDonald's Japan, now using AI art instead.
imiric 6 hours ago [-]
> The correlation here would be something like: the people using AI to build apps previously would simply never have created an app, so it’s not affecting software development as a career as much as you first expect.
I don't think the original point or your interpretation is correct.
AI will not cause a loss of software development jobs. There will still be a demand for human developers to create software. The idea that non-technical managers and executives will do so with AI tools is as delusional as it was when BASIC, COBOL, SQL, NoCode, etc. were introduced.
AI will affect the industry in two ways, though.
First, by lowering the skill requirements to create software it creates a flood of vibe coders competing for junior-level positions. This dilutes the market value of competent programmers, and makes entering the software industry much more difficult.
A related issue is that vibe coders will never become programmers. They will have the ability to create and test software, which will improve as and if AI tools continue to improve, but they will never learn the skills to debug, troubleshoot, and fix issues by actually programming. This likely won't matter to them or anyone else, but it's good to keep in mind that theirs is a separate profession from programming.
Secondly, it floods the software market with shoddy software full of bugs and security issues. The average quality will go down, causing frustration for users, and security holes will be exploited, increasing the frequency of data leaks, privacy violations, and unquantifiable losses for companies. All this will likely lead to a rejection of AI and vibe coding, and an industry crash not unlike the video game one in 1983 or the dot-com one in 2000. This will happen at the bottom of the Trough of Disillusionment phase of the hype cycle.
This could play out differently if the AI tools reach a level of competence that exceeds human senior software engineers, and have super-human capabilities to troubleshoot, fix, and write bug-free software. In that case we would reach a state where AI could be self-improving, and the demand for human engineers would go down. But I'm highly skeptical that the current architecture of AI tools will be able to get us there.
candiddevmike 18 hours ago [-]
I think you could extrapolate it and say folks are primarily using GenAI for things they aren't considered a specialist in.
jezzamon 15 hours ago [-]
I was thinking about this comparison recently along a slightly different axis: one challenge when working with translations (human or machine) is that you can't vet whether they're correct yourself. So you just have to trust the translation and hope for the best. It's a lot easier to trust a person than a machine, though I have had someone message me once to say "this translation of your article is so bad I feel like the translator did not put in a serious attempt".
It's similar to vibe coding where the user truly doesn't know how to create the same thing themselves: You end up with output that you have no way of knowing it's correct, and you just have to blindly trust it and hope for the best. And that just doesn't work for many situations. So you still need expertise anyway (either yours or someone else's)
vasco 11 hours ago [-]
> So you just have to either trust the translation and hope it's the best. It's a lot easier to trust a person than a machine
Only because you think the machine will be incorrect. If I have a large multiplication to do it's much easier to trust the calculator than a mathematician, and that's all due to my perception of the calculator accuracy.
eurleif 13 hours ago [-]
You can kinda validate machine translation by doing a circular translation, A->B->A. It's not a perfect test, but it's a reasonably strong signal.
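A rough sketch of what I mean, where `translate(text, src, dst)` is a stand-in for whatever MT call you're using:

    import difflib

    def round_trip_score(text, translate, src="en", dst="de"):
        # A->B->A: translate out and back, then compare with the original.
        forward = translate(text, src, dst)
        back = translate(forward, dst, src)
        # Low similarity flags a likely-bad translation; high similarity
        # is only weak evidence the translation is good.
        return difflib.SequenceMatcher(None, text.lower(), back.lower()).ratio()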
ruszki 12 hours ago [-]
I’m learning German, and I quite often use this test. Google Translate fails it most of the time. This is true even between Hungarian and English. The difference with the latter is that I can properly choose between the options given to me, if there are any, without translating back. Notably, the round trip often passed even when it gave terrible translations.
So this test fails many times, and even when it doesn't, the translation is still usually not good. However, when you don't care about nuances, it's usually still fine. Translation also seems to be better when the text is larger. Translating a single word almost always fails. Translating a whole article is usually fine. But even there, it matters what you translate. Translating the Austrian subreddit fails quite often -- sometimes completely, for whole large posts. Translating news is better.
numpad0 11 hours ago [-]
Round-tripping is definitely better than nothing, but it lets through a lot of "Engrish" errors with multi-meaning words and with implied perspectives. Cape as in clothing and cape as in the pointed coastal landform, general as in military rank vs general as in generic, etc.
e.g. "Set up database as: [read replica]" and "Database setup complete: [a replica was loaded.]" may translate into a same string in some language.
stevage 14 hours ago [-]
Well no, you can run the output. That gives you some measure of correctness.
danpalmer 14 hours ago [-]
No I think the original comment is correct. You need to be able to evaluate the result.
The goal of software is not to compile and run, it's to solve a problem for the user when running. If you don't know enough about the problem to be able to assess the output of the program then you can't trust that it's actually doing the right thing.
I've seen code generated by LLMs that runs, does something, and that the something would be plausible to a layperson, but that I know from domain knowledge and knowing the gotchas involved in programming that it's not actually correct.
I think the comparison with translation is a very apt one.
NicuCalcea 18 hours ago [-]
While it's just anecdotal evidence, I have translator friends and work has indeed been drying up over the past decade, and that has only accelerated with the introduction of LLMs. Just check any forum or facebook group for translators, it's all doom and gloom about AI. See this reddit thread, for example: https://www.reddit.com/r/TranslationStudies/comments/173okwg...
While professionals still produce much better quality translations, the demand for everything but the most sensitive work is nearly gone. Would you recommend your offspring get into the industry?
tossaway0 14 hours ago [-]
I imagine even for professional teams with clients that need proper translations, LLMs have made it so one person can do the work of many. The difference between the quality of former automatic translations and LLM translations is huge; you couldn't rely on auto translation at all.
Now a professional service only needs a reviewer to edit what the LLM produces. You can even ask the LLM to translate in certain tones, dialects, etc. and they do it very well.
neilv 17 hours ago [-]
Some additional things that translators do (which I recall from a professional translator friend, put in my own words):
* Idioms (The article mentions in passing that this isn't so much a difficulty in Norwegian->English, but of course idioms usually don't translate as sentences)
* Cultural references (From arts, history, cuisine, etc. You don't necessarily substitute, but you might have to hint if it has relevant connotations that would be missed.)
* Cultural values (What does "freedom" mean to this one nation, or "passion" to this other, or "resilience" to another, and does that influence translation)
* Matching actor in dubbing (Sometimes the translation you'd use for a line of a dialogue in a book doesn't fit the duration and speaking movements of an actor in a movie, so the translator changes the language to fit better.)
* Artful prose. (AFAICT, LLMs really can't touch this, unless they're directly plagiarizing the right artful bit)
krackers 18 hours ago [-]
This seems like a terrible comparison, since Google Translate is completely beaten by DeepL, let alone LLMs. (Google Translate almost surely doesn't use an LLM, or at least not a _large_ one, given its speed.)
Ninjinka 18 hours ago [-]
For Google's Cloud Translation API you can choose between the standard Neural Machine Translation (NMT) model or the "Translation LLM (Google's newest highest quality LLM-style translation model)".
That’s absolutely not the point of this article. The point was that people said, once Google Translate was introduced, that translators would lose their jobs. Just like people say the same thing about developers with LLMs nowadays. The point is: they didn’t, and they won’t.
DeepL is not part of that point. Yes, maybe eventually, developers will lose their jobs to something that is an evolution of LLMs. But that’s not relevant here.
resoluteteeth 4 hours ago [-]
> The point was that people said, once Google Translate was introduced, that translators would lose their jobs. Just like people say the same thing about developers with LLMs nowadays. The point is: they didn’t, and they won’t.
Translators are losing their jobs now, though. Google Translate wasn't very good for Japanese, so a lot of people assumed that machine translation would never be a threat, but DeepL was better, to the point where a lot of translation work moved to just cleaning up its output, and current state-of-the-art LLMs as of the last six months are much better still and can also be given context and other instructions to reduce the need for humans to clean up the output. When the dust settles, translation as a job is probably going to be dead.
tiagod 2 hours ago [-]
I highly doubt LLMs will do a good job translating literature anytime soon.
ethan_smith 11 hours ago [-]
Google Translate has actually been using neural machine translation since 2016 and integrated PaLM 2 (a large language model) in 2023 for over 100 languages, though DeepL does still outperform it in many benchmarks.
>All this is not to say Google Translate is doing a bad job
Google Translate is doing a bad job.
The Chrome translate function regularly detects Traditional Chinese as Japanese. While many characters are shared, detecting the latter is trivial using Unicode code points - Chinese has no kana. The function used to detect this correctly, but it has regressed.
Most irritatingly of all, it doesn't even let you correct its mistakes: as is the rule for all kinds of modern software, the machine thinks it knows best.
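The check really is that trivial -- a sketch:

    def contains_kana(text: str) -> bool:
        # Hiragana (U+3040-U+309F) and katakana (U+30A0-U+30FF) form one
        # contiguous range; Japanese text almost always contains some kana,
        # while Chinese text contains none.
        return any(0x3040 <= ord(ch) <= 0x30FF for ch in text)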
simonw 17 hours ago [-]
That doesn't sound like a problem with Google Translate, it sounds like a problem with Google Chrome. I believe Chrome uses this small on-device model to detect the language before offering to translate it: https://github.com/google/cld3#readme
numpad0 11 hours ago [-]
IMO, it's still not too late and it'll never be too late to split and reorganize Unicode by languages - at least split Chinese and Japanese. LLMs seem to be having issues acquiring both Chinese and Japanese at the same time. It'll make sense for both languages.
The syntaxes aren't just different but generally backwards, and - it's just my hunch - they sometimes sound like they are confused about which word modifies which.
jjani 9 hours ago [-]
It's only a matter of time before they have an LLM both 1. cheap 2. fast 3. good enough that they'll replace Google Translate's current model with it. I'd be very surprised if they'd put more than 1 hour of maintenance into Translate's current iteration over the last 12 months.
ryao 16 hours ago [-]
> At the dinner table a Norwegian is likely to say something like “Jeg vil ha potetene” (literally “I will have the potatoes”, which sounds presumptuous and haughty in English) where a brit might say “Could I please have some potatoes?”.
I find “I will have the potatoes” to be perfectly fine English and not haughty in the slightest. Is this a difference between British English and American English?
jvanderbot 15 hours ago [-]
When ordering, "I will have" sounds reasonable.
When asking someone to pass them to you, just imagine them turning to you, looking you in the eye, and asserting "I will have the potatoes" like it's some kind of ultimatum. Yes, that is strange.
roxolotl 13 hours ago [-]
It’s such an anachronistic statement that I laughed out loud reading your comment. I was even taught that you can’t pass things mid-air: you place the potatoes down in front of each person in the chain, and they pass them along to the person who wants them.
noobermin 15 hours ago [-]
Americans are extremely polite, and American English is replete with niceties in everyday speech. It's funny: having been a tourist abroad, it was only living in a significantly non-American country for a few years that led me to realise this. I even lived in another country, albeit a former US colony, for ten years and didn't notice it there, given American influence.
I stopped saying stuff like "I would like a latte today" or, more Midwestern, "could I get a latte today" in Singapore because people would just get confused. Same with being too polite when receiving things. There are ways to be polite, but they usually involve fewer words, because anything else confuses people.
danpalmer 14 hours ago [-]
> Americans are extremely polite
Having grown up in the UK and living in Australia, I do not find Americans polite. To me, politeness is "please", "thank you", "may I have", etc, whereas "I would like a latte today" reads to me as a demand. In context it's fine (it stands out a bit but not in a problematic way), it's not particularly rude, but in general just stating your desires is not considered polite in my experience in UK/AU.
There are some other parts of American English that may be considered polite, I notice a lot of US folks addressing me as "sir" for example, but this sort of thing comes off as insincere or even sarcastic/rude.
I know this is how people communicate so they don't really bother me, I'm used to them and accept that different people have different expectations. I also understand that Americans might believe they are being polite and consider me to be rude, but I think this is why blanket statements like "Americans are extremely polite" are just missing so much cultural nuance.
lelanthran 3 hours ago [-]
> folks addressing me as "sir" for example, but this sort of thing comes off as insincere or even sarcastic/rude.
That's news to me! In my part of the world I call even the janitorial staff "Sir". I'm not aware of anyone ever thinking that was rude.
einarfd 3 hours ago [-]
When people call me Sir, I find it uncomfortable.
I know it is a thing in English and specifically in the USA.
But it is so far from the Norwegian cultural norms I grew up with, that I'll probably never get used to it.
not_a_bot_4sho 13 hours ago [-]
America is not a monolith. Spend a day in New York and a day in Seattle. The language is the same, but politeness varies widely.
danpalmer 12 hours ago [-]
I agree. The parent comment was treating America as a monolith, and my point is that there are many different contexts around the world that will interpret the described use of language in a very different way, and often not read it as polite.
ssl-3 5 hours ago [-]
As a native-born American who has always lived in a Midwestern part of the States where visiting people often remark upon our unusual politeness:
I've never ordered a latte "today".
A simple statement of "I'd like a latte" fits fine in our regional vernacular.
I think that "I'd like a latte today" would appear a weird bit superfluous of specificity, unless it comes from a regular and familiar customer who might normally order something different and/or random.
I mean: "Today"? As opposed to later this week or something? It's implicit that the latte is wanted as soon as they can get around to it. (Unless yesterday's order was something different, and then adding "today" may make sense.)
---
But English is fucked up, and I'm getting old, so I've spent a lot of time observing (and sometimes producing) slight miscommunication.
In terms of things like ordering food or drink at a counter, it still works fine as long as nobody gets grumpy, and the desired item is produced.
I'm reminded of a local festival I went to as a little kid with my parents. I was getting hungry and they were asking me what I wanted to eat. There were a lot of pop-up food vendors there, mostly with tables and stuff instead of the now-ubiquitous "food truck."
In the corner was a gyro stand with amazing-looking racks of spinny-meat. I wanted to try whatever that was.
The big banner said "GYROS" and we got in line.
Discussions were had between my parents about this "GYROS" concept, which they'd never seen before either. Was it a singular, or a plural? How many "GYROS" might a boy reasonably want? How was it pronounced? It looked like "gyro" as in "gyrocopter", but it seemed to them that it must be Greek, so they went through some variations on pronunciation as the line moved forward.
As we got closer, some of these questions were answered: the sign definitely referred to a plural offering, and seeing people leave, it became clear that [unlike things like churros or tacos or donuts] one was plenty.
And when we got to the front, the conversation went like this:
Parents: "Our son wants one of these... but we're not sure how to say it. Jye-roe? Hee-roe?"
"They're just gyros," he replied to them to them dismissively in a thick Greek accent, and motioned for his staff to produce 1 gyro.
And then the man looked at me, with his skin sweaty on that hot sunny day and a thick gold chain around his neck, and said in his very best and most careful English something to me that I can never forget. "I call them gyros. But listen to me, you can call them whatever you want. Jye-roe? Hee-roe? Yee-roe? Sandwich? Whatever you say, and however you say it: If you get what you expect, then you said it right. Alright?"
My trepidatious nods made sure that he was understood, and the awesome fucking sandwich-taco I had that day changed my entire perspective on food -- and ordering food -- forever.
So, sure: Ordering "one latte today"?
It sounds weird to me, but if it is understood and you get what you want, then it works. Correctness? Politeness? Whatever. Despite seeming weird: It works.
(Up next: There's a lot of ways to mispronounce approximately everything related to ordering pho, and none of them are wrong.)
strken 4 hours ago [-]
As an Australian, I'd modify my order with "today" if I'd asked for a flat white or a piccolo the last time I was in the shop. It would be a way to highlight that my order has changed, and said to a barista who knows me and is likely running on autopilot.
rr808 15 hours ago [-]
Maybe it's a NY thing, but I was shocked when I heard people in restaurants saying "I want...". Growing up outside the US, "want" is a very impolite word. People in the US are polite but direct; English/Irish people are usually much less direct.
jjani 9 hours ago [-]
To give you an even more shocking one: in Korean, known for its various formality and politeness levels, the standard form when ordering is saying "Give me X"!
ryao 14 hours ago [-]
Coincidentally, I am in NY. That said, “I want…” when ordering seems fine to me too.
sudahtigabulan 15 hours ago [-]
> could I get a latte today actually
To me (non-American) the above sounds like sarcasm, not politeness. Adding "today" and/or "actually" could mean you've had it with their delays.
I like to joke that Americans always seem to find ways to get offended by innocuous things, but in this case the joke is on me.
bluefirebrand 11 hours ago [-]
Tone of voice and mannerisms matter a lot here
To me (Canadian, not American) "Could I get a latte today actually" sounded something like "Normally I get something other than a latte, but actually today I would like a latte instead"
Not rude at all, but kind of assumes some context
zzo38computer 11 hours ago [-]
I would think that it depends on the context.
To me it seems (without any context) that it might mean that you changed your mind about what day you wanted it. This does not seem to make sense in many contexts, though.
ryao 9 hours ago [-]
It makes sense if you are a regular and today, you want something other than your usual.
3eb7988a1663 11 hours ago [-]
American - I fully agree with your interpretation. Throwing on the time component gives up all pretense of being polite.
SoftTalker 12 hours ago [-]
It would also depend on the tone of voice and even body language used.
dr_dshiv 19 hours ago [-]
In my limited experience, LLMs can have issues with translation tone — but these issues are pretty easily fixed with good prompting.
I want to believe there will be even more translators in the future. I really want to believe it.
alganet 17 hours ago [-]
> easily fixed with good prompting
Can you give us an example of a typical translation question and the "good prompting" required to make the LLM consider tone?
Good, but not exactly what I expected for "easily fixed".
It includes a lot of steps and constant human evaluation between them, which implies that decisions about tone are ultimately made by whoever is prompting the LLM, not the LLMs themselves.
> "If they are generally in the style I want..."
> "choosing the sentences and paragraphs I like most from each..."
> "I also make my own adjustments to the translation as I see fit..."
> "I don’t adopt most of the LLM’s suggestions..."
> "I check it paragraph by paragraph..."
It seems like a great workflow to speed up the work of an already experienced translator, but far from being usable by a layman due to the several steps requiring specialized human supervision.
Consider the scenario presented by the blog post regarding bluntness/politeness and cultural sensitivities. Would anyone be able to use this workflow without knowing that beforehand? If you think about it, it could make the tone even worse.
nottorp 5 hours ago [-]
> It seems like a great workflow to speed up the work of an already experienced translator, but far from being usable by a layman due to the several steps requiring specialized human supervision.
Just like programming. And anything else "AI" assisted.
Helps experts type less.
alganet 3 hours ago [-]
Most of my time as a programmer is not spent typing.
darvinyraghpath 19 hours ago [-]
Fascinating thought piece. While I agree with the thrust of the piece - that LLMs can't really replace engineers - unfortunately the way the industry works is that the excuse of AI, however grounded in reality, has been repurposed as a cudgel against actual software industry workers. Sure, eventually everyone might figure out that AI can't really write code by itself - and software quality will degrade.
But unfortunately we've long been on the path of enshittification and I fear the trend will only continue.
If Google's war against its own engineers has resulted in shittier software - and things start to break twice a year instead of once - would anyone really blink twice?
tartoran 18 hours ago [-]
Maybe AI can't replace engineers but it surely can apply downward pressure on engineers' salaries.
lodovic 11 hours ago [-]
I don't believe that. Software has become so expensive in the last decade that only very large enterprises and venture capitalists were still building custom applications (ymmv). LLMs make it cheaper and faster to create software - you don't need these large teams and managers anymore, just a few developers. Smaller companies will be back in the game.
bluefirebrand 11 hours ago [-]
> Smaller companies will be back in the game.
Smaller companies that can afford smaller salaries you mean?
I'm not sure that is at odds with what the post you were replying to said
camillomiller 8 hours ago [-]
>> For what it’s worth, I don’t think it’s inconceivable that some future form of AI could handle context and ambiguity as well as humans do, but I do think we’re at least one more AI winter away from that, especially considering that today’s AI moguls seem to have no capacity for nuance, and care more about their tools appearing slick and frictionless than providing responsible output.
Fantastic closing paragraph! Loved the article
carlosjobim 18 hours ago [-]
As long as the person you are talking or writing to is aware that you're not a native speaker, they will understand that you won't be able to follow conventions around polite language or understand subtle nuances on their part. It's really a non-issue. The finer clues of language are intended for people who are from the same culture.
Kiro 10 hours ago [-]
> I see claims from one side that “I used $LLM_SERVICE_PROVIDER to make a small throwaway tool, so all programmers will be unemployed in $ARBITRARY_TIME_WINDOW”, and from the other side flat-out rejections of the idea that this type of tool can have any utility.
No, the one side is saying that all their code is written by LLMs already and that's why they think that. In fact, I would say the other side is the former ("it works for throwaway code but that's it") and that no-one is flat-out rejecting it.
banq 10 hours ago [-]
LLM = Google Translate + Context
Seanambers 6 hours ago [-]
Prompting and context solves this.
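For what that looks like in practice, here is a minimal sketch (in Python) of a context-rich translation prompt. All the field names and sample values are hypothetical; the point is just the shape of the prompt you'd send to any of the models discussed here:

    # Hypothetical sketch: assembling a context-rich translation prompt.
    # The fields and sample values are made up; only the overall shape matters.

    def build_translation_prompt(text, source_lang, target_lang,
                                 audience, register, notes):
        """Wrap the text to translate with the context an LLM needs to
        choose tone, formality, and cultural adaptation sensibly."""
        return (
            f"Translate the following {source_lang} text into {target_lang}.\n"
            f"Audience: {audience}\n"
            f"Register: {register}\n"
            f"Notes: {notes}\n"
            "Preserve the speaker's intent rather than the literal wording.\n\n"
            f"Text:\n{text}"
        )

    print(build_translation_prompt(
        text="Jeg vil ha potetene",
        source_lang="Norwegian",
        target_lang="English",
        audience="family dinner table, informal",
        register="casual and friendly, not blunt",
        notes="A literal rendering ('I want the potatoes') reads as rude in English.",
    ))

The idea is that, given those extra lines, the model can choose something like "Could you pass the potatoes?" instead of the blunt literal rendering discussed elsewhere in this thread.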
p3rls 2 hours ago [-]
I just posted[0] that Google is translating words that should be translated as "bastard" in foreign pop music (in this case, by a member of the most popular boy band in the world, BTS) as racial slurs like the n-word. This is pretty much the worst-case error scenario for a translation service.
It's amazing more people aren't talking about this stuff. How incompetent do you have to be to allow racial slurs in translation, especially with all the weights toward being pro-diversity etc. that already exist?
Just more enshittification from ol' Sundar and the crew
[0] https://kpopping.com/news/2025/Jun/17/Google-Translation-of-...
DidYaWipe 9 hours ago [-]
Hopefully it tells everyone to never use this douchey term again.
autobodie 11 hours ago [-]
"Behold the impeccable nuance of my opinion"
yieldcrv 16 hours ago [-]
This is like the worst comparison, since generative AI is far better at conversational translation than Google Translate
LLMs will tell you idioms, slang, and the point behind them
You can take a screenshot of telegram channels for both sides of a war conflict and get all context in a minute
In classic HN fashion I’m sure I missed the point, ok translators are still in demand got it.
Google Translate has been leapfrogged by the same thing that allows for “vibecoding”
noname120 16 hours ago [-]
The article starts with a giant straw man and mischaracterisation; not sure that I want to read the rest of the article at this point
BergAndCo 13 hours ago [-]
Literally the first example is an outright lie.
> At the dinner table a Norwegian is likely to say something like “Jeg vil ha potetene” (literally “I will have the potatoes”, which sounds presumptuous and haughty in English). Google Translate just gives the blunt direct translation.
"To will" means "it is my will", i.e. to want to, which became the future tense in English. In Norwegian is still means "want", and Google Translate indeed translates it as "I want the potatoes." If you translate the rising (pleading) intonation on "potatoes", you then have an unwritten "please?", i.e. "I want the potatoes?...", which is passable English.
Most businesses think AI code is "good enough", and that machine translation is "good enough", which tanks the entire industry because there is now more supply than demand. He says there are still plenty of translator jobs, but then justifies it by saying "it’s inadvisable to subsititute (sic) Google Translate for an interpreter at a court hearing." Meaning, the thing taking away all the tech jobs temporarily (unchecked mass-migration) is the same thing keeping him employed temporarily.
I have no intention to try to commercialize my app, as there would be no moat. Anyone who wanted to could feed this thread to Claude, ask it to write a starting prompt for Claude Code, and produce a similar system in probably less time than it took me.
How is this part done? How are they chosen/combined to give the best results? Any info would be appreciated as I've seen this sort of thing mentioned before, but details have been scant!
As a human translator, if I were starting to translate a text in the middle and I wanted my translation to flow naturally with what had been translated before, I would want both a summary of the previous content and notes about how to handle specific names and terms and maybe about the writing style as well. When I start working on the project again tomorrow, I’ll see if Claude Code can come up with a solution along those lines.
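A minimal sketch of that idea - rolling a short summary and a term glossary forward from chunk to chunk - might look like this. The `call_llm` function is a stand-in for whichever model API one uses; every name here is hypothetical:

    # Hypothetical sketch: chunk-by-chunk translation with carried context.
    # `call_llm` is a placeholder for a real model API call.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("Wire this up to your model provider.")

    def translate_document(chunks, glossary=None):
        """Translate chunks in order, passing forward a running summary and a
        name/term glossary so later chunks stay consistent with earlier ones."""
        glossary = dict(glossary or {})  # e.g. {"翻訳メモリ": "translation memory"}
        summary = "(start of document)"
        translated = []
        for chunk in chunks:
            prompt = (
                "You are translating a Japanese document into English.\n"
                f"Summary of the document so far: {summary}\n"
                f"Glossary to follow exactly: {glossary}\n"
                "Match the established style. Translate this chunk:\n\n"
                f"{chunk}"
            )
            translated.append(call_llm(prompt))
            # Refresh the summary so the next chunk sees the story so far.
            summary = call_llm(
                "Summarize this translation so far in three sentences, "
                "as context for translating the next chunk:\n\n"
                + "\n".join(translated)
            )
        return "\n".join(translated)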
It can be as simple as discussing one's own religion
I also have a few heuristics (e.g. "I can't translate" in many different languages) to detect if it deviates from that.
It works pretty well.
[0] https://nuenki.app
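Presumably that heuristic is something along these lines - a simple phrase-list check over the model output. The markers below are illustrative guesses, not the actual list nuenki.app uses:

    # Illustrative sketch of a refusal-detection heuristic for translation
    # output. The marker list is a guess at the kind of thing described above.

    REFUSAL_MARKERS = [
        "i can't translate",          # English
        "i cannot translate",
        "je ne peux pas traduire",    # French
        "ich kann nicht übersetzen",  # German
        "no puedo traducir",          # Spanish
        "翻訳できません",             # Japanese
    ]

    def looks_like_refusal(output: str) -> bool:
        """True if the model appears to have refused instead of translating."""
        lowered = output.lower()
        return any(marker in lowered for marker in REFUSAL_MARKERS)

    assert looks_like_refusal("I can't translate this text because...")
    assert not looks_like_refusal("Jeg vil ha potetene")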
Interesting. Curious if you modeled the cost of that single translation with the multiple LLM calls and how that compares to a human.
I didn't double-check the module's arithmetic, but it seems to have been in the ballpark, as my total API costs for OpenAI, Anthropic, and Google yesterday--when I was testing this system repeatedly--came to about eight dollars.
A human translator would charge many, many times more.
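For a rough sense of why the gap is so large, here's a back-of-the-envelope sketch, consistent with the roughly eight dollars of API spend mentioned above. All rates and sizes are placeholder assumptions, not anyone's actual price list:

    # Back-of-the-envelope cost sketch for a multi-model translation pipeline.
    # Rates are made-up placeholders; substitute real per-token pricing.

    PRICE_PER_MTOK = {  # (input USD, output USD) per million tokens - hypothetical
        "model_a": (3.00, 15.00),
        "model_b": (2.50, 10.00),
        "model_c": (1.25, 5.00),
    }

    def pipeline_cost(doc_tokens: int, rounds: int = 4) -> float:
        """Assume each model reads roughly the document per round and emits a
        comparable number of tokens back (draft plus revisions)."""
        total = 0.0
        for price_in, price_out in PRICE_PER_MTOK.values():
            total += rounds * doc_tokens * (price_in + price_out) / 1_000_000
        return total

    # A ~5,000-token document through four rounds across three models comes
    # out to well under a dollar at these made-up rates:
    print(f"${pipeline_cost(5_000):.2f}")

Even if the real numbers run several times higher, a professional human rate (typically quoted per source word or character) for a document that size would plausibly land in the hundreds of dollars.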
https://soniox.com
Disclaimer: I work for Soniox.
The only thing that actually worked was knowing the target language and sitting down with multiple LLMs, going through the translation one sentence at a time with a translation memory tool wired in.
The LLMs are good, but they make a lot of strange mistakes a human never would. Weird grammatical adherence to English structures, false-friend mistakes that no one bilingual would make, and so on. Bizarrely, many of these would not be caught between LLMs -- sometimes I would get _increasingly_ unnatural outputs instead of more natural outputs.
This is not just for English to Asian languages, even English to German or French... I shipped something to a German editor and he rewrote 50% of the lines.
LLMs are good editors and suggestors for alternatives, but I've found that if you can't actually read your target language to some degree, you're lost in the woods.
I have been astounded at the sophistication of LLM translation, and haven't encountered a single false-friend example ever. Maybe it depends a lot on which models you're using? Or it thinks you're trying to have a conversation that code-switches mid-sentence, which is a thing LLMs can do if you want?
A very similar story has been happening in radiology for the past decade or so. Tech folks think that small scale examples of super accurate AIs mean that radiologists will no longer be needed, but in practice the demand for imaging has grown while people have been scared to join the field. The efficiencies from AI haven't been enough to bridge the gap, resulting in a radiologist _shortage_.
In the last few years, using only my primitive knowledge of Japanese (and Chinese -- which helps a lot with reading/writing) and the help of Google Translate, I have been able to fill out complex legal and tax documents. When I walk into a gov't office as the only non-Asian person, I still get a double take, but then they review my slightly-less-than-perfect submission and proceed without issue. (Hat tip to all of the Japanese civil servants who have diligently served me over the years.)
Hot take: Except for contracts and other legal documents, being an "actual professional" (translator) is a dead career at this point.
DeepL is a step up, and modern LLMs are even better. There's some data here[0], if you're curious - DeepL is beaten by 24B models, and dramatically beaten by Sonnet / Opus / https://nuenki.app/translator .
[0] https://nuenki.app/blog/claude_4_is_good_at_translation_but_... - my own blog
We don't know yet how a modern transformer trained on radiology would perform, but it almost certainly would be dramatically better.
Why? Is there something about radiology that makes the transformer architecture appropriate?
My understanding has been that transformers are great for sequences of tokens, but from what little I know of radiology, sequence-of-tokens seems unlikely to be a useful representation of the data.
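For what it's worth, vision transformers sidestep that objection by slicing an image into fixed-size patches and treating each patch as one "token." A toy sketch (numpy; all sizes arbitrary, and the random projection stands in for a learned one):

    # Toy sketch: how a vision transformer turns an image into a token
    # sequence. A 224x224 image with 16x16 patches yields 196 "tokens".

    import numpy as np

    def patchify(image: np.ndarray, patch: int = 16) -> np.ndarray:
        """Split an (H, W) image into flattened non-overlapping patches,
        returning an (N, patch*patch) sequence."""
        h, w = image.shape
        rows, cols = h // patch, w // patch
        return (image[:rows * patch, :cols * patch]
                .reshape(rows, patch, cols, patch)
                .swapaxes(1, 2)
                .reshape(rows * cols, patch * patch))

    rng = np.random.default_rng(0)
    xray = rng.random((224, 224))                # stand-in for a radiograph
    tokens = patchify(xray)                      # (196, 256)
    embeddings = tokens @ rng.random((256, 64))  # toy projection to model dim
    print(tokens.shape, embeddings.shape)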
Lacking cultural context while reading translated texts is what made studying history finally interesting to me.
First, the Pixar thing was green pepper, not green beans: https://www.businessinsider.com/why-inside-out-has-different...
Second, the Pixar one is not "mere" translation; it is full localization because they changed the visual to match the "textual" change.
The Pokemon one is where the change was limited to the "text". The translator's heart might have been in the right place (it would depend on how integral to the story it is that the item is onigiri), but they didn't have the authority to make the full breadth of changes needed for such an adaptation to be successful.
4Kids was very well known to visually change the Japanese shows they imported if they thought it was worth it, mostly in the context of censorship. For example, all guns and cigarettes were removed from One Piece, turned into toy guns and lollipops instead.
The most infamous example, however, has got to be Yu-Gi-Oh!. Yu-Gi-Oh started as a horror-ish manga about a trickster god forcing people to play assorted games and cursing their souls when they inevitably failed to defeat him. The game-of-the-week format eventually solidified into the characters playing one single game, Duel Monsters (the Yu-Gi-Oh! TCG itself in the real world), and the horror-ish aspects faded away, although they still remained part of the show's aesthetic, based around Egyptian human sacrifices and oddly-card-game-obsessed ancient cults.
When the manga was adapted to the screen, it started directly with a softer tone[1], especially because the show was to be a vehicle for selling cards in the real world, not dissimilarly to Pokemon and MANY other anime from the era.
Nothing that happens in the show is particularly crude or shocking, it had that kind of soft edginess that fit well with its intended target audience (early teens). I imagine watching Bambi had to be much more traumatizing than anything in the show.
But that was still not enough for 4Kids, which had a pretty aggressive policy of no violence or death. Kind of problematic when the show's main shtick was "Comically evil villain puts our heroes in a contraption that will kill them if they don't win." (You can imagine that the frequency with which these traps actually triggered neared zero.)
To solve this, 4Kids invented the Shadow Realm. The show, thanks to its occultist theming, already had examples of people being cursed, or their souls being banished or captured. 4Kids solidified these vague elements into the Shadow Realm as a censorship scapegoat. Any reference to death was replaced with the Shadow Realm. Now, one might wonder why the censors thought that "hell-like dimension where your soul wanders aimlessly and/or gets tortured for eternity" was in any way less traumatizing than "you'll die", but I imagine it's because there was always the implication that people could be 'saved' from the Shadow Realm[2] by undoing the curse.
The Shadow Realm was a massive part of the western Yu-Gi-Oh mythos and even today it's a fairly common meme to say that somebody got "sent to the shadow realm", which makes it all funnier that it is not part of the original show.
A couple funny examples off the top of my head:
- Yugi must win a match while his legs are shackled. Two circular saws, one for him and one for the enemy, are present in the arena. They near the two competitors as they lose Life Points, with the loser destined to have their legs cut off.
In the 4Kids adaptation, the saws are visually edited to be glowing blue, and it's stated they're made out of dark energy that will send anybody who touches them to the Shadow Realm.
- A group of our heroes fight a group of villains atop a skyscraper with a glass roof. In the original version, the villains state that the roof has been booby-trapped so that the losing side will explode, plunging the losers to their death by splattening.
In the 4Kids version, the booby trap remained, but the visuals were edited to add a dark mist under the glass, with the villains stating that there's a portal under the roof that will send anybody who touches it to the Shadow Realm. This is made funnier when the villains lose and are shown to have had parachutes with them all along, which are NOT edited out.
[1] Technically speaking, there was a previous adaptation that followed the manga more closely and got only one season, generally referred to as Season 0.
[2] It does eventually happen in the anime that the heroes go into an alternate dimension to save somebody's cursed soul. Obviously, this dimension was directly identified as the Shadow Realm in the localization.
My wife worked for a company that helped provide teaching content for schools throughout Brazil. They'd interview teachers all over the country and one of the complaints from teachers in isolated communities was that they had to use the same textbooks as other places in Brazil without any regard to their own situation.
They reported that many examples for starting math for kids featured things like "strawberries" or "apples", things the kids had never seen or maybe never even heard of. So now they needed to abstract over what a "fruit" and a "countable object" are, as well as whatever the example was trying to teach. Teachers reported less engagement, and it was more work for them to adapt it to local relevance.
Try to teach kids about vegetables in the US midwest and use green beans and Bok Choy as an example, for instance. It doesn't make sense.
We bought kids' toys on Amazon and the fruits were strange. Not sure if they were Asian varieties or just made up.
Netflix also does something absurd with their subtitles. I was watching "The Empress" (which is set in the Austro-Hungarian Empire) with German audio and English subtitles. I like listening to the real actors' voices, and learning the sounds and cadence of the language. So the characters speak in Italian for a while (the subtitles say "[speaking in Italian]"), and when they switch back to German the subtitles clarify... "[in English]". The fuck, Netflix? Surely the viewer of this show understands they didn't speak English in the Austro-Hungarian Empire, so why write that it's English? What the hell is Netflix even trying to achieve here? Seems robotic: "us = English speakers, therefore everyone's default must be English"?
It makes sense to "translate" locale cultural indicators in say Wall·E which was very location agnostic but not so much for say Turning red which is very culturally specific.
The localization of The Incredibles in Argentina was embarrassing, though someone must have thought it was a good idea. They used local voice actors popular at the time (though not so much today) with strong porteño (Buenos Aires) accents. They also referred to the streets with Argentinian names, e.g. "let's turn at that corner of Corrientes Avenue!". The problem is that Corrientes Av. is very typical of Buenos Aires, but nothing on screen looked anywhere close to it, so the whole thing was ridiculous and embarrassing, sort of like if the characters pretended they were in Tokyo and were Japanese.
If they had gone the extra mile (maybe possible in the near future) and reskinned every character to look more Argentinian, and rethemed the city to look more like Buenos Aires, would I have been happier? Certainly not -- I want to see the original movie as intended, not some sanitized version designed to make me feel "at home" and not challenge me in the slightest.
(I watched the movie in English as well, mind you).
It did remind me of watching "The Beast" (La Bête)[0] in the original French with subtitles. I was surprised when I saw the subtitles say "[In English]" and thought, "Oh, damn, the characters did actually switch to English".
[0] https://en.wikipedia.org/wiki/The_Beast_(2023_film)
For example, when Elisabeth is practicing several languages, each is subtitled "[in $language]", but when she switches back to German the subtitles readily explain... "[in English]". This confused the hell out of me!
> I feel confident in asserting that people who say this would not have hired a translator or learned Japanese in a world without Google Translate; they’d have either not gone to Japan at all, or gone anyway and been clueless foreigners as tourists are wont to do.
The analogy here would be something like: the people using AI to build apps previously would simply never have created an app, so it's not affecting software development as a career as much as you'd first expect.
It would be like saying AI art won’t affect artists, because the people who would put in such little effort probably would never have commissioned anyone. Which may be a little true (at least in that it reduces the impact).
However, I don’t necessarily know if that’s true for software development. The ability to build software enabled huge business opportunities at very low costs. I think the key difference is this: the people who are now putting in such low effort into commissioning software maybe did hire software engineers before this, and that might throw off a lot of the numbers.
The work got easier, so what we do got more complex.
Becoming obsolete is a fear of people who are not willing or able to learn arbitrary problem domains in a short amount of time. In that case learning to use a particular tool will only get you so far. The real skill is being able to learn quickly (enthusiasm helps).
Gas powered pogo sticks, shoe fitting X-ray, Radium flavored chocolates, Apollo LLTV, table saws, Flex Seal for joining two halves of boats together, exorbitantly parallelized x86 CPU, rackable Mac Pro with M1 SoC, image generation AI, etc.
Tools can be useless, or even dangerous.
It's all cleaning up slop code. Always has been.
</tired old fart voice>
More optimistically, you can think of "user created code" as an attempt at a design document of sorts; they were trying to tell you (and the computer) what they wanted in "your language". And that dialog is the important thing.
We can never decide, we just like learning, and there is little real, impactful research into programming as a business.
In two decades we will still collectively say "we are learning so much", ignoring that fact.
I know enough Japanese to talk like a small child, make halting small talk in a taxi, and understand a dining menu / restaurant signage broadly. I also have been enough times to understand context where a literal translation to English fails to convey the actual message... for example, in cases where they want to say no to a customer but can't literally say no.
I have found Google Translate to be similarly magical and dumb for 15 years of traveling to Japan without any huge improvements other than speed. The visual real-time image OCR stuff was an app they purchased (Magic Lens?) that I had previously used.
So who knows, maybe LLM coding stays in a similar pretty-good-never-perfect state for a decade.
I think this is definitely a possibility, but I think the technology is still WAY too early to know that, even if the "second AI winter" the author references never comes, we wouldn't still discover tons of other use cases that would change a lot.
Word Lens, by Quest Visual
https://en.wikipedia.org/wiki/Quest_Visual
I'm not sure how seriously people take the threat of non-coding vibe-coders. Maybe they should! The most important and popular programming environment in the world is the spreadsheet. Before spreadsheets, everything that is today a spreadsheet was a program some programmer had to write.
If AI coding improves productivity, it might move us closer to having 2X as much work as we can possibly do instead of 3X.
Nowadays we already have bullshit jobs that keep academics employed. Retraining takes several years.
With "AI" the danger is theoretically limited because it creates more bureaucracy and reduces productivity. The problem is that it is used as an excuse for layoffs.
The normal conversation is that productivity growth has slowed and the divide has increased, not that more productivity creates lower outcomes in real terms.
https://www.bls.gov/productivity/images/labor-compensation-l...
Data is collected through the National Compensation Survey: https://www.bls.gov/respondents/ncs/
They don't know the office politics, or go on coffee breaks with the team - humans still have more context and access. We still need people to manage the goals and risks, to constrain the AI in order to make it useful, and to navigate the physical and social world in their place.
The AI slop will make it harder for the small guys without a marketing budget (some lucky few will still make it, though). It will slowly kill the app ecosystem, until all we reluctantly trust is FANG. The app pricing reflects it.
That is incorrect, sir.
First, because many problems were designed to fit into spreadsheets (human systems designed around a convenient tool). It is much more likely that several spreadsheets were _paper_ before, not custom-written programs. In a lot of cases, that paperwork was adapted directly to spreadsheets; no one did a custom program intermediate.
Second, because many problems we have today could be solved by simple spreadsheets, but they often aren't. Instead, people choose to hire developers, for a variety of reasons.
I say spreadsheet software replaced paper.
That's the disagreement. You have intuition, I have sources:
https://en.wikipedia.org/wiki/Spreadsheet#Paper_spreadsheets
https://en.wikipedia.org/wiki/Spreadsheet#Electronic_spreads...
Meanwhile, the point of software development is not to write code. It's to get a working application that accomplishes a task. If this can be done, even at low quality, without hiring as many people, there is no more value to the human. In HN terms, there is no moat.
It's the difference between the transition from painting to photography and the transition from elevator operators to pushbuttons.
The other is about an "AI" website generator, spamming every video at the start.
I wonder what kind of honest efforts would require that kind of marketing.
I don't think the original point or your interpretation is correct.
AI will not cause a loss of software development jobs. There will still be a demand for human developers to create software. The idea that non-technical managers and executives will do so with AI tools is as delusional as it was when BASIC, COBOL, SQL, NoCode, etc. were introduced.
AI will affect the industry in two ways, though.
First, by lowering the skill requirements to create software, it creates a flood of vibe coders competing for junior-level positions. This dilutes the market value of competent programmers, and makes entering the software industry much more difficult.
A related issue is that vibe coders will never become programmers. They will have the ability to create and test software, which will improve as and if AI tools continue to improve, but they will never learn the skills to debug, troubleshoot, and fix issues by actually programming. This likely won't matter to them or anyone else, but it's good to keep in mind that theirs is a separate profession from programming.
Secondly, it floods the software market with shoddy software full of bugs and security issues. The quality average will go down, causing frustration for users, and security holes will be exploited, increasing the frequency of data leaks, privacy violations, and unquantifiable losses for companies. All this will likely lead to a rejection of AI and vibe coding, and an industry crash not unlike the video game one in 1983 or the dot-com one in 2000. This will happen at the bottom of the Trough of Disillusionment phase of the hype cycle.
This could play out differently if the AI tools reach a level of competence that exceeds human senior software engineers, and have super-human capabilities to troubleshoot, fix, and write bug-free software. In that case we would reach a state where AI could be self-improving, and the demand for human engineers would go down. But I'm highly skeptical that the current architecture of AI tools will be able to get us there.
It's similar to vibe coding where the user truly doesn't know how to create the same thing themselves: You end up with output that you have no way of knowing is correct, and you just have to blindly trust it and hope for the best. And that just doesn't work for many situations. So you still need expertise anyway (either yours or someone else's).
Only because you think the machine will be incorrect. If I have a large multiplication to do, it's much easier to trust the calculator than a mathematician, and that's all due to my perception of the calculator's accuracy.
So this test fails many times, and even when it doesn't, the translation is still usually not good. However, when you don't care about nuances, it's usually still fine. Also, translation seems to be better the larger the text is. Translating a single word almost always fails. Translating a whole article is usually fine. But even there it matters what you translate. Translating the Austrian subreddit fails quite often -- sometimes completely, for whole large posts. Translating news is better.
e.g. "Set up database as: [read replica]" and "Database setup complete: [a replica was loaded.]" may translate into a same string in some language.
The goal of software is not to compile and run, it's to solve a problem for the user when running. If you don't know enough about the problem to be able to assess the output of the program then you can't trust that it's actually doing the right thing.
I've seen code generated by LLMs that runs, does something, and produces something that would be plausible to a layperson, but that I know, from domain knowledge and from knowing the gotchas involved in programming, is not actually correct.
I think the comparison with translation is a very apt one.
While professionals still produce much better quality translations, the demand for everything but the most sensitive work is nearly gone. Would you recommend your offspring get into the industry?
Now a professional service only needs a reviewer to edit what the LLM produces. You can even ask the LLM to translate in certain tones, dialects, etc. and they do it very well.
* Idioms (The article mentions in passing that this isn't so much a difficulty in Norwegian->English, but of course idioms usually don't translate as sentences)
* Cultural references (From arts, history, cuisine, etc. You don't necessarily substitute, but you might have to hint if it has relevant connotations that would be missed.)
* Cultural values (What does "freedom" mean to this one nation, or "passion" to this other, or "resilience" to another, and does that influence translation)
* Matching actor in dubbing (Sometimes the translation you'd use for a line of a dialogue in a book doesn't fit the duration and speaking movements of an actor in a movie, so the translator changes the language to fit better.)
* Artful prose. (AFAICT, LLMs really can't touch this, unless they're directly plagiarizing the right artful bit)
https://cloud.google.com/translate/docs/advanced/translating...
DeepL also has a translation LLM, which they claim is 1.4-1.7x better than their classic model: https://www.deepl.com/en/blog/next-gen-language-model
DeepL is not part of that point. Yes, maybe eventually, developers will lose their jobs to something that is an evolution of LLMs. But that’s not relevant here.
Translators are losing their jobs now, though. Google Translate wasn't very good for Japanese, so a lot of people assumed that machine translation would never be a threat, but DeepL was better, to the point where a lot of translation work moved to just cleaning up its output. Current state-of-the-art LLMs as of the last six months are much better still, and can also be given context and other instructions to reduce the need for humans to clean up the output. When the dust settles, translation as a job is probably going to be dead.
https://www.osnews.com/story/142469/that-time-ai-translation...
Google Translate is doing a bad job.
The Chrome translate function regularly detects Traditional Chinese as Japanese. While many characters are shared, detecting the latter is trivial by comparing Unicode code points - Chinese has no kana. The function used to detect this correctly, but it has regressed.
Most irritatingly of all, it doesn't even let you correct its mistakes: as is the rule for all kinds of modern software, the machine thinks it knows best.
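The code-point check really is trivial. A sketch (the ranges come from the Unicode Hiragana and Katakana blocks; note that the presence of kana proves Japanese, while its absence is merely inconclusive):

    # Sketch: telling Japanese from Chinese text by looking for kana.
    # Hiragana is U+3040-U+309F and Katakana U+30A0-U+30FF; written Chinese
    # uses neither. Kana present => Japanese; no kana proves nothing, since
    # Japanese can occasionally be written in kanji alone.

    def contains_kana(text: str) -> bool:
        """True if any character falls in the Hiragana or Katakana blocks."""
        return any(0x3040 <= ord(ch) <= 0x30FF for ch in text)

    print(contains_kana("漢字だけではありません"))  # True: contains hiragana
    print(contains_kana("純粹的繁體中文句子"))      # False: Han characters only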
The syntaxes aren't just different but generally backwards, and - it's just my hunch, but - they sometimes sound like they are confused about which word modifies which.
I find “I will have the potatoes” to be perfectly fine English and not haughty in the slightest. Is this a difference between British English and American English?
When asking someone to pass them to you, just imagine them turning to you, looking you in the eye, and asserting "I will have the potatoes" like it's some kind of ultimatum. Yes, that is strange.
I stopped saying stuff like "I would like a latte today" or the more Midwestern "could I get a latte today" etc. in Singapore, because people would just get confused. Same with being too polite when receiving things. There are ways to be polite, but it usually involves fewer words, because anything else confuses people.
Having grown up in the UK and living in Australia, I do not find Americans polite. To me, politeness is "please", "thank you", "may I have", etc, whereas "I would like a latte today" reads to me as a demand. In context it's fine (it stands out a bit but not in a problematic way), it's not particularly rude, but in general just stating your desires is not considered polite in my experience in UK/AU.
There are some other parts of American English that may be considered polite, I notice a lot of US folks addressing me as "sir" for example, but this sort of thing comes off as insincere or even sarcastic/rude.
I know this is how people communicate so they don't really bother me, I'm used to them and accept that different people have different expectations. I also understand that Americans might believe they are being polite and consider me to be rude, but I think this is why blanket statements like "Americans are extremely polite" are just missing so much cultural nuance.
That's news to me! In my part of the world I call even the janitorial staff "Sir". I'm not aware of anyone ever thinking that was rude.