We had a mandatory ChatGPT training course at work. You had to sign up for limited-space classes. This is a large company; needless to say, it was chaos getting a significant number of people to participate.
I got a spot. We were shown how to copy and paste data from Excel and other data sources into the chat interface. We had sample data to work with, and there was always someone in class who would say "mine didn't work." The developers in the room asked about Codex; the instructor said she wasn't a developer.
We did get a certificate though. There was nothing they could teach that you couldn't learn by using the free version in your own time. Whatever they are doing with the Maltese government is just to increase the monthly active user count.
I’m now responsible for improving AI literacy in the organization I work for.
But the people in charge just want employees to answer some questions so they can hand over Claude or ChatGPT licenses and show that people are using AI to improve productivity.
There are people who don’t know when to use AI and when not to, and think they can just Claude their way through everything. I wanted to change that, but when the whole idea is simply to increase AI use, I guess they don’t care about how AI is used.
The leaders who mandate AI have no understanding of how to actually use it for productivity. They use it like a Magic 8-Ball to confirm whatever ignorance they have and believe the hype that it can do anything.
They have always done this. These are the same managers who ask subordinates for reports that support their predetermined agendas, or higher level execs who hire consultants for the same purpose.
I was on a quarterly demo the other day and the project lead for AI innovation was talking about the things he's preparing for the company.
I won't address the things he pitched (as coming soon), as I'm a developer and (hopefully) not the target audience, but I was quite surprised when they ran a questionnaire asking how many people use AI and how frequently. (The target demographic was middle management, product owners, etc.)
75% of people answering said they're using it daily and considered it an essential tool they need to work
Considering it was anonymous I was expecting lower numbers, honestly.
In the recent past, my department received an email from on high with a list of people who were yet to complete the "anonymous" survey.
I always assume my work-survey answers are traceable back to me, whether via self-doxxing through the answers themselves or via the rootkit-level MDM software that can record my screen, which they pinky-promise to use only for remote assistance in case I open a ticket with IT.
Most external survey providers claimed anonymity, but their T&Cs stated in a very roundabout way that they could provide some information to customers for quality purposes or something. Read: “we’ll deanonymize some users if the paying customer wants it.” Internal survey tools are subject to internal management pressure.
Even when you use a tool like Microsoft Forms, where MS really can’t be bothered to deanonymize users unless three-letter agencies get involved, it’s still possible to do timestamp matching between the proxy/VPN logs and the submission time.
Assume real anonymity only if the URL is the same for everyone and you can fill in the survey from any computer on the internet.
But the explanation for why people overhype AI usage is probably simpler. They want to keep their license because it’s a nice perk. They’ll use it to get the gist of a long email thread without bothering to read the details, to get some meeting minutes without validating that that was actually what was said, to generate some crappy modern equivalent of WordArt graphics for their presentations, and to feel like the time saved generating what is most of the time slop was worth it.
When I worked on this (outside of coding) it was a pain to find a use case that really benefited. The few that did were niche uses that fit an LLM like a glove; the rest was slop. I could see the usage reports and the BS self-reporting surveys. Everyone inflated the numbers and usage to justify keeping their license.
It's perfectly possible. Two tables, one stores answer responses only, the other just marks off who has responded. No link between them and you have anonymous data but can tell who hasn't responded.
Of course, if you record created/updated timestamps on both, insert both records in the same order, accidentally record the user code in the response data, take backups in between responses, have identifying questions, or just don't have that many people responding, it's easy to reverse engineer.
But it's quite possible to do right, I did it quite effectively almost by mistake years ago. Sent a customer survey out with generated codes as identifiers recorded with answers. Before sending reminder emails a script grabbed the codes, marked the customer as responded and wiped the code (so I could just get future responses where code was not null to mark next people off). Although I had timestamps the script meant customers were updated in blocks, there really wasn't any data to link them.
I know because the Boss was not happy he couldn't find out which customer had said what, and I had to point out all the communication (with customers and me) called it an anonymous survey, so why would I have saved them?
So it is possible, just not easy even if you intend it, and it's often not intentional...
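A minimal sketch of the two-table design described above, using SQLite (the table and column names here are my own invention, not anyone's actual schema). As the thread notes, insertion order and SQLite's implicit rowid can still leak a correlation, so a careful implementation would batch or randomize inserts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Table 1: answers only -- no user identifier, no timestamp.
cur.execute("CREATE TABLE responses (answer TEXT)")
# Table 2: who has responded -- no foreign key to any row in responses.
cur.execute("CREATE TABLE participation (user_id TEXT PRIMARY KEY)")

def submit(user_id: str, answer: str) -> None:
    # One transaction touches both tables, but stores nothing that
    # ties the answer row to the user row.
    with conn:
        cur.execute("INSERT INTO responses (answer) VALUES (?)", (answer,))
        cur.execute("INSERT OR IGNORE INTO participation (user_id) VALUES (?)",
                    (user_id,))

submit("alice", "daily")
submit("bob", "never")

# You can list non-responders for reminder emails...
responded = {row[0] for row in cur.execute("SELECT user_id FROM participation")}
print(sorted({"alice", "bob", "carol"} - responded))  # ['carol']

# ...but nothing in `responses` says which answer is whose.
print(cur.execute("SELECT answer FROM responses").fetchall())
```

The caveats in the thread still apply: backups taken between submissions, matching timestamps, or a tiny respondent pool can all undo this separation.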
If the participant has to trust the survey creator, then it is not anonymous. The survey creator can link the data.
If the survey creator has to trust the participant, the survey is anonymous. The participant can lie in the survey, lie about participating, or submit the survey multiple times.
Your example was not anonymous. But you did not break the participant's trust, thank you! (Or maybe you are lying.)
Anonymous example:
Sending a clean link to people to take the survey.
If not enough answers have been received, a reminder can be sent to all, with a clause that says: "if you have already done it, you can ignore this reminder."
It wasn't, and it was visibly updating while people were submitting their answers. I just rounded it as I don't remember the exact number at the time they closed the submission.
Could still be faked ofc, but I don't think they did.
> 75% of people answering said they're using it daily and considered it an essential tool they need to work
> (The target demographic was middle management, product owners etc)
This leaves a fairly wide set of options for what "essential" entails.
Do 75% of middle management and product owners actually need AI for their job? Seems unlikely.
Do 75% of middle management and product owners use AI to slop up emails, meeting "summaries", and reports? That's quite possible. Would they declare it to be an "essential tool"? One imagines they are not too fond of actually doing meaningful work.
It's quite easy to get high percentages like this when the AI is involved in make-work and the costs are low if not zero. The moment inference costs go up, most of this usage will evaporate.
Never expect an anonymous vote/quiz/whatever to be fully anonymous in a big corporation. If it's about touchy topics and/or can affect a given person's employment or performance review, the results will be skewed. When a metric becomes a target, it ceases to be a good metric, and all that.
It all rests on the shoulders of the responsible manager(s) and how moral they are. Many are not.
Not that there would be any ill effects from this; executives sit mostly in meetings and don’t really do much besides that, maybe occasionally writing a short email.
They also don’t have access to critical systems.
I agree with you, but also it’s not entirely unreasonable to just use AI (or any other tool) and let them figure out over time what are and aren’t good uses. This approach requires an ability to see past the next quarterly earnings report, which is a rare quality for a business, but it can be healthy. The long term result is likely to be a culture that is more AI literate than they would be if they had top down instruction. The optimal path is probably a bit of both, but if I had to pick a ditch it would be “trust my employees”.
The thing I have a real issue with, and which seems more common, is the belief that they can cut raises because AI will make them more productive. In that case, the best employees (read: those most capable of leveraging AI effectively) will leave to find better paying work and the remainder will be too busy with the additional workload to have time to figure out how to use AI to make themselves more efficient.
My saddest interaction recently was with a friend with a 1st class degree in computing and several years experience in software engineering in many prestigious companies.
I asked if he had tried out Claude code or anything similar.
His answer: "My company has scheduled a training course on that, so I'll wait."
Bold claim. I have the opposite experience. On this site I imagine most folks will agree with you, but there are a lot of folks who choose to work at larger companies over small ones.
I have worked for enterprise companies all my life; they are all a horrible mess of people trying to play 4D chess to get a promotion and look good. They do offer better work/life balance, so if you are not a workaholic like the startup/SF crowd, it's actually a decent job. Just remember to enjoy life outside the office, with people who are not from the office, and you will be fine.
It's the VC mind rot: the only thing that matters in life is working, apparently. A sad existence indeed, but you need a cohort to exploit if you're gonna make the next unicorn for cashing out.
He is serious. He has an AI company with a vibe coded website.
All positive comments here come from the financially invested or the near-retirement people who need cognitive assistance and are willing to sell out future generations.
I’ve never gone through a paid training course that wasn’t a complete waste of time. It’s at the point where people at work know there isn’t even a point in offering these to me. “But why don’t you take the Terraform training?” Because I’m not going to waste my time with a 3-day course where it takes the first day to install and configure Terraform. I can install it on my computer in 5 minutes. I think people usually see these as a paid vacation, but I find them so insufferably boring I’d rather just work.
Yes! Forty years ago (c. 1985) members of our department of anesthesiology (University of Virginia Medical Center) were offered an optional two full day course on how to do our own MedLine searches so as not to have to put in a request to the biomedical librarian for same.
I jumped at the chance to not have to be in the OR from 7am-5pm doing the same old same old but instead relax and learn something useful.
Bad choice.
The instructor and material were deadeningly boring; I couldn't even begin to enter into the computer the right search request format and terms and as I sat there I was reminded of my days in elementary school watching the big round clock on the wall tick away the minutes until the final dismissal bell.
Because our chairman was in the class and had encouraged all of us tenure-track faculty to take the course, I couldn't bail after the first day but had to return for the second day.
Subsequently I continued using the biomedical librarian to request my searches (it took just a couple minutes to fill out the form) with excellent results.
As someone who never bothered to get any certificates (beyond a University degree) even when I'd do online courses (of which the most course-like must've been fast.ai), are these ever actually useful in any manner?
They are useful for getting a job, that’s about it.
In our case, we get our entire team AWS solution architect certs as well just so we can always tell our customers that our whole team is certified (we do a lot of “forward deployed” stuff for enterprise customers).
> We were shown how to copy and paste data from excel and other data sources into the chat interface.
Grnnnnnnnnnnnnnnnnnnguuurnnngh.
I remember the copy and paste drudgery from the early days of ChatGPT. It was a miserable and joyless experience. Nowadays (and for a long time) you can simply attach the file.
This sounds an awful lot like the early "how to get on the internet highway" classes that existed. I don't think the classes had a lot of worth in the strict educational sense of "here's how you do X, Y & Z," but they were, I think, much more effective at saying, you know, "X, Y & Z are now possible."
It does take time and a little skill to know the edges of the AI tools. What's reasonable? What's not? What's likely to hallucinate? You could get something in the rough bounds of trust.
For everyone in the EU: Copying and pasting sensitive data (like customer data) into AI tools is a violation of the GDPR, and potentially the AI act, which will be enforced soon.
I would be cautious to advocate these laws that strongly in the context of AI tools:
Companies and employees always make their decisions based on a risk/reward basis.
Sometimes a commercial contract (like Microsoft Copilot) is enough to cover your ass and to meet the needs of the regulator.
Even if the operator is exactly the same.
Laws are constraints to navigate, but if you are successful enough (ahem, rich) then they don’t apply to you.
At the moment what the EU wants is to make sure that in the long-term they can access your private information.
Realistically, if you are in the EU, you have more risk of the government arresting you for telling your darkest secrets to an EU-hosted model than to a Chinese model (which doesn’t cooperate with them).
EU Chat Control is here to protect kids and protect you from terrorists; you don’t want to claim you support pedophiles, right?
So following these rules is always a matter of choice.
Comply and you will be stuck with your shitty Mistral and no privacy; don’t comply and you have your shiny Claude, though you have to think about what you input into it.
I agree with you; I could have made it more compact by making one point per paragraph. Sometimes it’s a bit difficult to cleanly articulate my ideas, and I try not to clean them up with GPT, in order to keep the original tone.
As for the not-liking-it part, I guess that if someone writes a long text, there are more chances of finding at least one point of disagreement than in a very short sentence.
It depends heavily on what type of data though. As far as I understand if you have no PII or anything close to it you are mostly safe - especially if it's customer data but aggregated.
You’re painting with too broad a brush. GDPR only covers personal information. There’s plenty of sensitive business information not covered by GDPR (per-business-customer revenue data, for example) that is legal to put into an AI tool but that your employer may not want you to.
Oh I don't know, it seems like a good step forward towards regulatory capture. First partner, then certify, then require the certification. A limited regional beta, like launching your app in New Zealand first.
But if you can prove any kind of success with Malta then you can go to the next 10 "slightly bigger" nations out there and tell them "See? It worked very well with Malta". And then move to a bigger layer, and a bigger layer...
In practice, since this is valid for a year, it is essentially a free trial they are giving away, and they hope it may generate additional revenue at some point after that.
This article gives a distorted view of my country. We have a largely normal government with normal levels of corruption. You can compare Malta to a municipality in a big country. I have lived in many north western European countries and at the municipal level I personally witnessed the same levels of corruption, that in my native Malta would be considered the biggest scandal of the year. It's all a matter of framing, propaganda and geopolitics.
> in many north western European countries and at the municipal level I personally witnessed the same levels of corruption, that in my native Malta would be considered the biggest scandal of the year.
Could you give examples of a few of these to illustrate the kind of thing we're talking about? It currently feels a bit hand wavy from both sides.
Most corruption that I have been witness to, both in Malta and abroad falls into two categories: nepotism and procurement/tenders. Both of these are extremely common to the point that most don't even consider them corruption. For context, I work in academia, so I am sort of by proxy employed with local/federal governments and am therefore privy to how procurement deals are made. There is nothing that happens in Malta that doesn't also happen in Germany or in any other country I have worked in. If anything in Malta someone is always going to cry foul, whereas if you're witnessing corruption at the municipal or state level in Germany, it mostly flies under the radar.
I am sympathetic to your desire to defend your country, which has many good things about it as well I am sure. But using Hacker News for solely this purpose is against the rules, so don't do that.
Respectfully, in the best case you don't really know what you are talking about. These three cities certainly cannot be categorised as tax havens in any sense of the word, and the local legislation is objectively different - as evident to anyone comparing it.
You didn’t even mention the Knights of Malta yet. They claim to care for the poor, but within a year of signing up, Prince Bernhard ordered naked girls for an evening, which caused his secretary, J. Thomassen, to quit in March 1950. Other Knights of Malta included other Dutch royals, three FBI directors, and the father of JFK.
Since the article is about Malta, the discussions here should be centered on Malta. Sure, everyone knows the US is corrupt as f*** but what relevance does that have here and now? There are plenty of other places on HN to bring up this "argument".
What was notable about Malta is not the topic of the article and particularly irrelevant now that corruption in Malta that helps Russia would not be notable.
The problem is that while the US and UK are well known countries with lots of press coverage, Malta isn't, so when you link one article in a foreign publication you are getting a highly distorted and politically charged view of a country you otherwise know nothing about.
The US is becoming like Russia, what a twist. Well, it could be worse. The free democratic world will have to align around another leader; I hope it can be Europe, but it's far from sure.
The bad part is, this is not magically going to disappear with the next election, whoever wins. Literally whoever wins. Just as many things changed permanently even decades after 9/11, it will be similar with this. Proposers of radical changes can easily be shot again, POTUS or not, or have some other variant of discrediting applied.
I wish I were joking; this is how the majority of the world operates. It's very easy for those in power to drag their countries into this, out of weakness or convenience, but extremely hard to fully get out.
Europe is far more doomed, the US at least has upsides.
Europe is on a downward trend; Muslims are taking over and it's not doing anything remotely good. There is just no hope or future, only beautiful historical architecture, beautiful nature, and the climate.
Laundering of CC/Trial Accounts/Enterprise LLM inference is already a HUGE market, leveraged in part for distillation attacks on western AI.
A whole country’s worth of accounts just got access to a service we know is being laundered en masse and is also the same tech currently propping up many economies at the moment.
That same country is known for laundering other forms of liquidity. This is par for the course, not propaganda. And it’s going to be a huge problem by November.
Hey, my nickname means “Russian.” And my real country of origin, Hungary, was also not ideal until recently. In the past 40 years I’ve heard of only a single instance where any of this caused hatred to show up. And it didn’t even happen to me.
It’s nowhere near what my brother got because he has a darker complexion and looks like a stereotypical Arab from far away.
So, are you sure this hatred is against the common folks?
Don’t want the hatred? Stop the biggest war in Europe since WW2, stop racketeering at the regional level, stop acting victimized when there’s no one else to blame for your own actions, and finally start working with other countries to build strong partnerships.
I don't understand why you are being downvoted. If someone started calling people "Avi Goldstein" in a thread about Israel, I'm sure he'd get flagged to death in no time.
I believe GP's intent was to point out that a company can get into an agreement with a government (make an offer they can't refuse), but a company cannot get a "region" to sign an agreement/contract with them.
That whole thread is absurd, but if I would have to answer I would say Great Britain is the name of the region/group of islands? Open to be wrong, I know little about the UK
Our only hope as a society in this respect is that in two years or so the price of RAM falls to the level where more people are able to run local SOTA models (which, by then, should correspond to commercial SOTA models six months older).
Malta is among Europe's leading adopters. The country ranks first for workplace AI usage and third overall for general AI adoption. Beyond AI, Malta also has one of the highest rates of social media usage in Europe.
This initiative is less about AI literacy or OpenAI and more reflective of the Malta government's policy of making technology accessible across all society.
It's not the first time either; in the early 2000s a similar partnership with Microsoft provided heavily subsidized Microsoft Office licenses.
Malta is also a very small country. It has a population of around 500,000 and an area of around 310 km², roughly the size of Atlanta, Georgia. It's not hard to have a high adoption rate when your entire population is so small.
I'm not sure what this has to do with AI, but press freedom is poorer than in most western European countries, and the prosecution has left a lot to be desired. There are rumors of a trial happening later this year.
As a local the Daphne case has been a sore point and something which makes me very angry and sad. However it's not the only country with debatable press freedom. It's somewhat difficult to know that my country has been reduced to a single bad episode.
The assassination of an investigative journalist and its subsequent handling is certainly a symptom of the island's embedded corruption. (And I say this as someone who likes the place!) When gambling and international finance make up >10% of the GDP, that's not unexpected. IIRC gambling and online gaming alone used to be >15% of Malta's economy but that balance has shifted over the past decade.
I'd think that the country's regressive anti-abortion laws are a bigger stain on its reputation. You can root out corruption. Moving the nation's Overton window towards a less illiberal stance tends to take a few generations.
Press freedom is absolutely not an issue. Certainly much much better than, say, the UK's where getting arrested for a tweet is common.
Regarding DCG's case: the trial of the prime suspect is due in July, I believe. DCG's tragic case was a one-off. Attacks on democracy are much more common and frequent in other Western countries.
Reporters Without Borders' recently released Press Freedom Index 2026 puts Malta 67th and the UK 18th. So no, certainly not much better, although looking at some of the historic data it was better, e.g. in 2010.
Outrageous. Malta is Europe; it needs a European provider, and this is a European security issue. Malta needs to align with European values as per its agreement with the EU.
I asked ChatGPT about corruption levels in Malta and whether the technology companies pay bribes to officials to get contracts. It was honest in its response and said Malta has serious corruption levels by EU standards.
That answers my wondering of why on earth all citizens require a paid service of ChatGPT.
> why on earth all citizens require a paid service of chatGPT
If all citizens use ChatGPT as a search engine instead of Google, their ability to access information and defend against fake news will (as of right now) be significantly improved, and we only shift privacy concerns from Google to OpenAI, which does seem like the lesser evil.
ChatGPT is not a facts/news service. Its output is shaped by its system prompt. In fact, its knowledge cutoff is far further in the past than other AI services'. So the "news" it dishes out could be outdated, unless it decides to search the web, in which case it can't be better than Google.
Is your comment from last year? Because ChatGPT now, to me, is just a machine that translates prompts into web searches. It's better than Google for non-obscure results (which are the majority of searches), and also because there are no ads (yet). You can make the argument that it's not better than Google's AI Mode (not Google Search), but I personally prefer ChatGPT/Grok to AI Mode.
Original point stands that AI is useful and better than manual searches.
Do you think average people rigorously query multiple angles and carefully read every one of the Google results to synthesize a well-rounded viewpoint on any given topic? No, but with LLMs they can do that in one prompt.
Yes and even if you are paying, you’re still the product!
That’s what happens in two-sided markets. Everyone’s the product.
The original adage of “if you’re not paying, you’re the product” doesn’t necessarily rule out the converse. The fact that the grandfather comment made a Freudian slip makes it funnier.
> The original adage of “if you’re not paying, you’re the product” doesn’t necessarily rule out the converse.
I believe the logical term "converse" means swapping the conclusion and the condition in a logical statement, ie converse(if A then B) = if B then A
So here the converse would be "if you're the product, you're not paying". Which doesn't exactly make sense to me as a claim to make here. Did you just mean to reinforce your first sentence? In which case, I think you mean "the inverse", not the converse. However, I have only used the word converse in a "formal logic" scope (proofs) so I'm not sure if it has a more flexible meaning in informal language use.
The converse and inverse are logically equivalent by contraposition, so it doesn't really matter which one you use. If you think through it, you can see that "if you're the product, you're not paying" is equivalent to "if you're paying, you're not the product".
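The equivalence is easy to verify exhaustively. With P = "you're paying" and Q = "you're the product", the adage is ¬P → Q; a quick truth-table check confirms its converse (Q → ¬P) and its inverse (P → ¬Q) agree on every assignment:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b is false only when a is true and b false."""
    return (not a) or b

for P, Q in product([False, True], repeat=2):
    converse = implies(Q, not P)  # "if you're the product, you're not paying"
    inverse = implies(P, not Q)   # "if you're paying, you're not the product"
    # The converse and inverse are contrapositives of each other,
    # so they must match on every row of the truth table.
    assert converse == inverse

print("converse and inverse agree on all four assignments")
```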
Hear hear! And those who say paying is the only time one has a chance to not be the product should look at getting involved with a genuine charity or volunteer program too. There are no universal rules about this kind of thing.
Not long ago, nearly everyone in the anglosphere had a habit of talking to a pastor, revealing all their dirty secrets under the veil of anonymity. The Church was an incredibly well-informed organization. Today OpenAI and Anthropic are recreating a parody of that tradition: people talk to an AI pastor, under the veil of anonymity, and divulge their darkest secrets.
Honestly, depending on how it’s implemented, the course could be really socially useful, both for establishing some baseline knowledge that could help avoid some of the pitfalls of too-credulous use of AI, and for spurring people to innovate in their local businesses because they’ve been exposed to ideas earlier than would happen “naturally” as ideas percolate through society.
It's sobering to think that if every single person in Malta -- an entire country! -- signed up for ChatGPT and used it weekly, ChatGPT’s WAU would increase by only a few tenths of a percent.
The EU is probably footing the bill for most of it. Part of the funding Malta receives goes towards "digital projects" along with green tech and SME support.
Would be interesting long term if this sways public opinion about data centers in Malta. I do support AI literacy in general, though, and this is a good step. I'd wonder how much this deal is actually costing Malta, if anything.
Malta makes money from iGaming and money laundering. There are literally no other businesses there, other than basic necessities, and even those barely work. It's only focused on entertainment.
They import food and water. Malta is very hot during the summer. There are AC units everywhere, and they double as the default heating too, as there's no "European winter" there. Everyone collects rainwater and stores it on the roof.
They are one tsunami away from being decimated.
There's one company renting servers and it's full of online casinos, just so the companies meet the regulatory requirement.
Malta is the worst place on earth I can think of to have a data center.
Most of your comments have nothing to do with operating a data center, and it seems like you have some ill feelings towards Malta. Tourism, manufacturing, and financial services are all industries apart from iGaming. Parts of your phone are likely made in Malta.
Presumably what you refer to as "money laundering" is the impression that Malta attracts foreign investments by offering regulation for poorly regulated industries and tax incentives. Which is essential to maintain competitiveness as a small state. You'd be surprised to hear that most money laundering in Malta is not tied to the igaming industry at all.
Malta is not a good place for a data center because real-estate is expensive and cooling is expensive.
Tourism is relatively big there, but only relative to the population numbers. I'd argue gambling contributes more to the GDP, while tourism only keeps the lights on in the economy. Tourism lets the citizens make money, but it's a small country, so it doesn't scale. Gambling scales globally.
Tourism scales globally in the same way, unless you mean the island is already at 100% occupancy. Also, my point is it's hard to trust you when you ignore the top export while talking about exports.
Unlikely. Other than the telcos, there's only one proper commercial data centre here. Space is very constrained, and electricity supply stability plus summer heat aren't a fun combination.
As a complete layman, I do wonder why you would bother building a datacenter at a place that everyone agrees is going to be basically underwater in the next 50-100 years.
OpenAI is inherently incentivized to sell as much LLM compute as possible; that is not neutral "AI literacy." You don't let tobacco companies run anti-smoking education either.
Anti-smoking education is written by the government about the risks, not by the tobacco company about why cigarettes are great. Hence why the tobacco companies weren't keen on it.
Data centers in a country that barely has enough water and electricity for its citizens? That is utterly ridiculous. This AI hype is going crazy; it's all an insane joke, right?
I ran some numbers on how much it would cost to build MaltaGPT, a sovereign-hosted ChatGPT.
Malta has a population of 500k. Let's assume 100k people use MaltaGPT daily, and they send an average of 10 messages per day, so roughly 1M messages per day. That averages 694 per minute, but at peak it could be 3-5x that, so let's say 3,000 per minute. Usage will of course vary by day of week and time of day (they could partner with a Pacific island and share inference hardware).
Those 3000 messages per minute translate to 50 messages per second. Let's say average prompt input is 5k tokens, and output is 500. So 250k tokens per second for prompt processing (let's ignore caching for simplicity) and 25k tokens per second for output decode.
If we take a 500B dense model, that concerts to roughly 1 trillion flops per token. So we need 250 petaflops per second of prompt processing and 25 petaflops for output decode. So 275 PFLOPS in compute.
That may sound like a lot, however an NVIDIA DGX B200 machine (8x B200) has 144 PFLOPS of compute at FP4. That assumes 100% efficiency, which isn't really possible, and we also need to factor in memory, which would limit us more than compute does. So let's say we'd need 10 of them. For an entire country to have a sovereign version of ChatGPT.
The cloud cost to rent one machine is around $50/hour, so renting the cluster would come to roughly $4.4m per year. However, the list price of a machine is around €400k, so the price to buy the cluster outright would be around €5m (you need the rest of the data center too), with operating costs of around €500k per year.
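The whole estimate fits in a few lines of arithmetic; every number below is one of the assumptions above, not measured data:

```python
# Back-of-envelope check of the MaltaGPT sizing; all inputs are assumptions.
daily_users = 100_000
msgs_per_user = 10
msgs_per_day = daily_users * msgs_per_user            # 1,000,000
avg_per_min = msgs_per_day / (24 * 60)                # ~694 messages/minute
peak_per_min = 3_000                                  # ~4-5x the average
peak_per_sec = peak_per_min / 60                      # 50 messages/second

in_tok, out_tok = 5_000, 500                          # tokens per message
prefill_tok_s = peak_per_sec * in_tok                 # 250k tok/s prompt
decode_tok_s = peak_per_sec * out_tok                 # 25k tok/s decode

flops_per_tok = 1e12          # ~2 FLOPs/param for a 500B dense model
total_pflops = (prefill_tok_s + decode_tok_s) * flops_per_tok / 1e15

machines = 10                 # headroom for <100% efficiency + memory limits
capex_eur = machines * 400_000 + 1_000_000            # cluster + data centre
opex_eur = 500_000                                    # per year
population = 500_000

print(f"{total_pflops:.0f} PFLOPS, capex €{capex_eur / population:.0f}/citizen, "
      f"opex €{opex_eur / population:.0f}/citizen/yr")
```

This prints 275 PFLOPS, €10 of capex and €1/year of opex per citizen, matching the figures in the comment.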
showed the same reasoning to a fortune500 before moving to cloud (mind you, we already had the data centers paid for). didn't matter, went full on aws because we got a 40% discount on the first year. somewhere along the way the bad decision triggered some exec bonus. so along went the whole company.
I am worried that a govt would encourage their own citizens to use a foreign service that uploads every professional and personal interaction to an AI company's servers, presumably to be trained on.
"Foreign service".. European "sovereignty" is more or less an empty word. Maybe citizens don't understand this fully, but governments do.. (this is why there is no outcry on what happens in Gaza, this is why there is a "consensus" that the Ukrainian nazi fascists are not nazi fascists but.. well, Russians brought it upon themselves, they must not have their language and religion in Ukraine, otherwise it is a risk.., this is why European military bases are used in aggression against Iran, this is why "everyone agrees" that China is a lot worse than US, though objectively speaking this is false etc etc)
Seems like textbook Inside the Tornado marketing. Pick a country as a bowling pin, show some success, go for a different/bigger country. Presumably cover EU first this way. Be the first to offer all-citizens licenses.
Shady companies do this so that people get locked into the product and then come to expect it. Then the companies suck the blood out of the local and national government for every cent.
Too many people are stupid-stubborn and not even willing to pay $5 a month for an online service that would help them greatly.
If a government thinks that ChatGPT even in its current form is a big boon, it makes sense to do this.
This is also insurance - your population gains LLM literacy and will have an advantage over other countries; even if it's only a tiny bit, at the level of states it adds up.
The subsidies deployed by the industry are so massive I don't even know if consumers need public assistance here. It's kinda like the gov was subsidizing web hosting or basic banking. The price for a regular consumer already barely hovers above zero.
Just look at this list of services included in Google's AI Pro subscription[1]. Google took everything it could think any consumer might need and bundled for $20/mo. There's even $10 GCP credit (that you can use for AI API calls).
It's a ploy to drive adoption. Once it's considered essential they can turn the screws in massive contracts with governments, big enterprises, universities, and public school systems. Probably some genuine competition on price, but the equilibrium price is probably below cost and not sustainable.
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
This is at the “good to their users” phase - giving free access to an otherwise non-free product is being good to them, right?
If only it came with YouTube Premium... Most of that list is just AI in existing products, which is not all that interesting. You get better value and models through ChatGPT or Claude, especially if you are a developer.
That's not close to "everything ... any consumer might need". It's a list of useless things, other than 5TB of storage. Granted, cloud storage typically sells for more than this, so they are offering Gemini for something like -$15/mo.
Can't imagine the size of brown envelope. Handing over your entire nation's thoughts to a foreign company operating under US Cloud Act in normal circumstances would be considered a risk to national security. Why not invest in home grown talent and companies?
Malta is part of the EU. I am personally very surprised about this partnership, just in the context of data security, privacy and the GDPR. How is the privacy of these EU citizens protected when all their prompts and data is sent to OpenAI? How do these EU citizens submit a request for all their personal data to be deleted from OpenAI records, a right they have under the GDPR with a compliant data processor?
Of course ChatGPT is available to EU citizens, should they choose to use it. That’s very different to the Maltese government actively promoting use of ChatGPT.
Of course Maltese citizens can still choose not to use ChatGPT (until it becomes mandatory), but if the State supported education is bound to one particular tool, storing user data outside the EU’s jurisdiction, I think that’s something to discuss.
ChatGPT is already available to users in the EU. It already has an EU-aligned terms of service. Not that I'd trust them, because the GDPR has been borderline useless in reality, but there's nothing particularly legally interesting about this offering.
> How do these EU citizens submit a request for all their personal data to be deleted from OpenAI records
Probably by sending an e-mail to a designated address, like most services that operate in the EU, but you can read their TOS if you'd like to be sure.
I mean, it's just a literal non-event legally. I'm repeating myself here, but OpenAI already operates in the EU. EU users can already use ChatGPT, with some assurances about adhering to GDPR. Offering the ad-free tier to a subset of EU users for free, who could already use the tier with ads for free, doesn't change anything legally in regards to data processing.
If you want my commentary on the political context, obviously I think it's not very intelligent for nations to be trusting a US corporation with all of their citizens' data. I think the most impactful use of LLMs is going to be their usage as surveillance and propaganda tools, so this is probably not a prudent decision. But legally, as pertains to GDPR, this is not different from the status quo in any way.
- Malta is selling passports and harboring criminals who kill journalists (we all remember Daphne Caruana Galizia don't we?).
- buying votes/parties there would get you 10 times the MEPs you get in Germany or France.
- their mayors can veto EU policy... This EU-thing really is democratic!
so: I doubt anyone has to care about that pesky GDPR if they buy the government of Malta.
It's an interesting way to control the population.. let them delegate thinking to systems, and then just control the systems to respond to your (you = government) preference.
My analogy is using AI is like using a navigation system, you can end up delegating everything to it and drive into a river...
I certainly hope our government is not doing anything similar. I don't want them to pay millions to a foreign company like OpenAI and basically kill competition with that.
> "Malta’s AI for All initiative will offer people of all backgrounds the opportunity to learn how AI can be used responsibly through a course developed by the University of Malta. The course is designed to help people understand what AI is, what it can and can’t do, and how to use it responsibly at home and work. After the course is completed, citizens can access ChatGPT Plus for one year at no cost to them."
Not sure who is more corrupt these days: governments or big corporations.
What’s dangerous here is people will eventually stop thinking critically altogether, about anything, and their view of the world will be based on what they are shown on AI apps.
Ugh, gives bad flashbacks of when Facebook did free access to all citizens in particular countries. Later only to harvest data and manipulate elections.
I wonder who is paying the bill for a nationwide OpenAI subscription. Is OpenAI doing this so they can show present (and potential) shareholders an increase in the number of users and ask for more money?
I used to work for a hosting company, and all the shady business like exploitation of children and sex workers came from there unfortunately. But that’s because people move their business there for legal reasons, not because of their residents I assume.
Everyone in Malta could already, before this deal/plan, and even without it now, use ChatGPT (or any other LLM model/service, whether free or premium.)
They are saying that the product is already available then implying a government deal on behalf of all citizens doesn't matter because the product is already available.
Anyone can use ChatGPT for free already. The vast majority of people using AI as a search engine alternative/chatbot never have any reason to pay. You don't even need an account.
I’m personally not a fan of OpenAI always referring to their model as “providing intelligence as a utility.” Sounds very condescending; are you saying this isn’t something we already have? If that’s the opinion, it may be good to reflect on how the models were trained: on millions upon millions of books for which no authors were compensated.
But that’s beside the point; the whole initiative is self-defeating by design. This isn’t like power: it’s something humans inherently possess, and this is simply a way to amplify what already exists. Intelligent people using AI generally seem to be more productive than when they don’t use it, and lazy or unintelligent people generally see cognitive decline, at least based on what I’ve heard online, but I could be wrong on that.
So saying “this is where you get intelligence” is both false marketing and destructive to OpenAI as a company, since by all definitions, it isn’t true.
> I’m personally not a fan of OpenAI always referring to their model as “providing intelligence as a utility.” Sounds very condescending, are you saying this isn’t something we already have?
Your body also generates electricity and natural gas. Do you also get upset when energy companies claim to provide these services as a utility?
"Humans also produce farts" is a new low. Can the AI people be interned or moved to some seasteading libertarian hellhole so the rest of us can live a normal life?
My brother is actually moving to one (although that's not the core focus of the community, but they are extremely sceptical of AI there).
I suspect in a few years it's going to be strange to talk to him and other people there. It's already hard to explain to people that "Yeah, you can have a phone call and it can sound like your dad but it might just be a chat bot."
>I’m personally not a fan of OpenAI always referring to their model as “providing intelligence as a utility.” Sounds very condescending, are you saying this isn’t something we already have?
We do and we don't. If you would go out there and talk to a random person about elliptic curves and matrix multiplications and whether you hit a performance ceiling in a specific 2x2 multiplication thingy with Karatsuba and wnaf, they would not know half the words, but the lying and flattering machine will be able to hold the conversation.
The thing will not get everything right (it will bullshit me about DSTU4145 using a normal basis, and lie about A being set to 1 for all standard curves), but it's definitely more intelligence than you can get from a taxi driver.
If it's not general superintelligence right there for five bucks a piece, I don't know what is
These philosophical questions are decades if not older https://en.wikipedia.org/wiki/Chinese_room
And the answer is "depends on who you ask and how many capabilities it has"
Does the prayer by a kafir not knowing the language in which the prayer is recited get forgiveness?
I mean, what's the point of this question even. The thing is either useful or fun or it's not. I personally think the whole AI is the work of devil tempting us, but some people would say that about pork sausages and Paulaner and I like my pork sausages with Paulaner.
> We do and we don't. If you would go out there and talk to a random person about elliptic curves and matrix multiplications and whether you hit a performance ceiling in a specific 2x2 multiplication thingy with Karatsuba and wnaf, they would not know half the words, but the lying and flattering machine will be able to hold the conversation.
Then perhaps their signalling isn't meant for you but for people who have to pay those pesky expensive intelligent people like translators, programmers, designers and writers. Those people would benefit greatly if they could rent intelligence much cheaper from companies like OpenAI.
How?? Are you saying there's a lot of silent AI-boosters on HN voting it down despite almost every single comment here being non-obsequious? Looks like your model of reality has detached from modelling reality.
For OpenAI because they get a lot of money, and for the government because they can keep tabs on how people use LLMs to make sure they're not doing anything naughty.
The fact that a nation provides free access to SOTA models to all its citizens via this partnership is not something I have seen before, so I find it interesting; also, Malta is not too far from me.
I got a spot. We were shown how to copy and paste data from excel and other data sources into the chat interface. We had sample data to work with, there was always someone in class who would say "mine didn't work." The developers in the room asked about codex, the instructor said she wasn't a developer.
We did get a certificate though. There was nothing they could teach that you couldn't learn by using the free version in your own time. Whatever they are doing with the Maltese government is just to increase the monthly active user count.
But the people in charge just want the employees to just answer some questions so they can handover Claude or Chat GPT licenses so they can show people are using AI to improve productivity.
There are people who don’t know when to use AI and when not to use AI, and who think they can just Claude their way through everything. I wanted to change that, but when the whole idea is just to increase AI use, I guess they don’t care about how AI is used.
I will not address the things he pitched (as coming soon), as I'm a developer and (hopefully) not the target audience, but I was quite surprised when they made a questionnaire asking how many people use AI and how frequently. (The target demographic was middle management, product owners etc)
75% of people answering said they're using it daily and considered it an essential tool they need to work
Considering it was anonymous I was expecting lower numbers, honestly.
In the recent past, my department received an email from on high with a list of people who were yet to complete the "anonymous" survey.
I always assume my work-survey answers are traceable back to me, whether via self-doxxing with my answers or via tracking links and the rootkit-level MDM software that can record my screen (which they pinky-promise to only use for remote assistance, in case I open a ticket with IT).
Trusting that process to be done well is probably not the greatest plan.
That’s not anonymous at that point. That’s an agenda.
Even when you use a tool like Microsoft Forms, where MS really can’t be bothered to deanonymize users unless 3-letter agencies get involved, it’s still possible to do timestamp matching between the proxy/VPN logs and the submission time.
Assume real anonymity only if the URL is the same for everyone and you can fill in the survey from any computer on the internet.
But the explanation for why people overhype AI usage is probably simpler. They want to keep their license because it’s a nice perk. They’ll use it to get the gist of a long email thread without bothering to read the details, to get some meeting minutes without validating that that was actually what was said, to generate some crappy modern equivalent of WordArt graphics for their presentations, and feel like the time saved generating what is mostly slop was worth it.
When I worked on this (outside of coding) it was a pain to find a use case that really benefited. The ones that did were all niche uses that fit an LLM like a glove. The rest was slop; I could see the usage reports and the BS self-reporting surveys. Everyone inflated the numbers and usage to justify keeping their license.
Of course, if you record created/updated timestamps on both, insert both records in the same order, accidentally record the user code in the response data, take backups in between responses, have identifying questions, or just don't have that many people responding, it's not hard to reverse engineer.
But it's quite possible to do right, I did it quite effectively almost by mistake years ago. Sent a customer survey out with generated codes as identifiers recorded with answers. Before sending reminder emails a script grabbed the codes, marked the customer as responded and wiped the code (so I could just get future responses where code was not null to mark next people off). Although I had timestamps the script meant customers were updated in blocks, there really wasn't any data to link them.
I know because the Boss was not happy he couldn't find out which customer had said what, and I had to point out all the communication (with customers and me) called it an anonymous survey, so why would I have saved them?
So it is possible, just not easy even if you intend it, and it's often not intentional...
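A minimal sketch of the flow the parent describes: one-time codes mark who has responded, and a sweep run before each reminder destroys the code-to-answer link so nobody (not even the boss) can query it afterwards. All names here (`invite`, `pre_reminder_sweep`, etc.) are made up for illustration, not the original system.

```python
import secrets

customers = {}   # customer_id -> issued one-time code (None once responded)
responses = []   # each: {"code": ..., "answers": ...}; codes wiped in batches

def invite(customer_id):
    # Issue a fresh unguessable code per customer and record it.
    code = secrets.token_urlsafe(8)
    customers[customer_id] = code
    return code

def submit(code, answers):
    responses.append({"code": code, "answers": answers})

def pre_reminder_sweep():
    """Run before each reminder: mark responders, then destroy the link."""
    seen = {r["code"] for r in responses if r["code"] is not None}
    for cid, code in customers.items():
        if code in seen:
            customers[cid] = None   # marked as responded
    for r in responses:
        r["code"] = None            # answers can no longer be traced

def reminder_list():
    # Only customers whose code is still live need a reminder.
    return [cid for cid, code in customers.items() if code is not None]
```

The key property: between sweeps the link exists only transiently, and after a sweep the stored answers carry no identifier at all.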
I don't trust anonymous surveys either now...
If the participant has to trust the survey creator, then it is not anonymous. The survey creator can link the data.
If the survey creator has to trust the participant, the survey is anonymous. The participant can lie in the survey, lie about participating, or submit the survey multiple times.
Your example was not anonymous. But you did not break the participant's trust, thank you! (Or maybe you are lying.)
Anonymous example: Sending a clean link to people to take the survey. If not enough answers have been received, a reminder can be sent to all, with a clause, that says: "if you have already done it, you can ignore the reminder."
The job market right now sucks so everyone is really just trying to not be the next cut.
Could still be faked ofc, but I don't think they did.
> (The target demographic was middle management, product owners etc)
This leaves a fairly wide set of options for what "essential" entails.
Do 75% of middle management and product owners actually need AI for their job? Seems unlikely.
Do 75% of middle management and product owners use AI to slop up emails, meeting "summaries", and reports? That's quite possible. Would they declare it to be an "essential tool"? One imagines they are not too fond of actually doing meaningful work.
It's quite easy to get high percentages like this when the AI is involved in make-work and the costs are low if not zero. The moment inference costs go up, most of this usage will evaporate.
It all rests on the shoulders of the responsible manager(s) and how moral they are. Many are not.
But maybe the simplest answer is that most people do use the tools daily now and consider them essential...?
As much as HN would hate to think that
Why not?
It’s much more of an issue with devs
The thing I have a real issue with, and which seems more common, is the belief that they can cut raises because AI will make them more productive. In that case, the best employees (read: those most capable of leveraging AI effectively) will leave to find better paying work and the remainder will be too busy with the additional workload to have time to figure out how to use AI to make themselves more efficient.
Fuck all of this.
https://www.npr.org/2012/05/03/151860154/put-away-the-bell-c...
I asked if he had tried out Claude code or anything similar. His answer: My company has scheduled a training course in that so I'll wait
:(
That's the hint. Most companies >50 employees suck.
All positive comments here come from the financially invested or the near-retirement people who need cognitive assistance and are willing to sell out future generations.
I jumped at the chance to not have to be in the OR from 7am-5pm doing the same old same old but instead relax and learn something useful.
Bad choice.
The instructor and material were deadeningly boring; I couldn't even begin to enter into the computer the right search request format and terms and as I sat there I was reminded of my days in elementary school watching the big round clock on the wall tick away the minutes until the final dismissal bell.
Because our chairman was in the class and had encouraged all of us tenure-track faculty to take the course, I couldn't bail after the first day but had to return for the second day.
Subsequently I continued using the biomedical librarian to request my searches (it took just a couple minutes to fill out the form) with excellent results.
As someone who never bothered to get any certificates (beyond a University degree) even when I'd do online courses (of which the most course-like must've been fast.ai), are these ever actually useful in any manner?
Many of them you can simply take the exam over and over until you pass, and then stick a shiny stupid badge on your LinkedIn profile.
In our case, we get our entire team AWS solution architect certs as well just so we can always tell our customers that our whole team is certified (we do a lot of “forward deployed” stuff for enterprise customers).
Grnnnnnnnnnnnnnnnnnnguuurnnngh.
I remember the copy and paste drudgery from the early days of ChatGPT. It was a miserable and joyless experience. Nowadays (and for a long time) you can simply attach the file.
It does take time and a little skill to know the edges of the AI tools. What's reasonable? What's not? What's likely to hallucinate? You could get something in the rough bounds of trust.
I can see a class helping with that.
These violations come with hefty fines.
Companies and employees always make their decisions based on a risk/reward basis.
Sometimes a commercial contract (like Microsoft Copilot) is enough to cover your ass and to meet the needs of the regulator.
Even if the operator is exactly the same.
Laws are constraints to navigate, but if you are successful enough (ahem, rich) then they don’t apply to you.
At the moment what the EU wants is to make sure that in the long-term they can access your private information.
Realistically, if you are in the EU, you run a higher risk of the government arresting you for telling your darkest secrets to an EU-hosted model than to a Chinese model (whose operator doesn’t cooperate).
EU Chat Control, is here to protect kids and protect you from terrorists; you don’t want to claim you support pedophiles right ?
So following these rules is always a matter of choice.
Comply, and you will be stuck with your shitty Mistral and no privacy; don’t comply, and you have your shiny Claude, but you have to think about what you feed into it.
EDIT: Needless to say I loathe it and I don't know why.
For the not-liking-it part, I guess that if someone writes a long text, there are more chances to find at least one point of disagreement than in a very short sentence.
Usually this would require the respective customer to agree to sharing that data with a third party.
I'd be surprised if there weren't already phishing attacks that work by pretending to be a LLM.
But nobody uses LLMs that aren't Gemini or Copilot enterprise, as companies are already on Google Cloud or Microsoft's offering anyway.
And there's high pressure on workers to find use cases where AI can boost productivity, with bonuses dangling on who finds real case scenarios.
I don't know about the results of these experiments, but I know unhappiness is widespread.
"Make the AI do xyz"
That clearly needs a custom harness to integrate with ORG tooling.
"No we won't pay for token usage, make it work with the subscription we're already paying for"...
Guess you don't want AI then...
Cue up the next cynical bad take.
That's true for almost everything in life.
> Whatever they are doing with the Maltese government is just to increase the monthly active user count.
That's one of their main goals. Another main goal is to also make money. There are a few other main goals.
What do you mean by "just to increase"? Did they try to hide their goal? Was it a secret agenda nobody knows about?
These are some strange tautological comments.
"Malta’s corruption is not just in the heart of government, it’s the entire body"
https://www.theguardian.com/commentisfree/2019/dec/03/malta-...
Could you give examples of a few of these to illustrate the kind of thing we're talking about? It currently feels a bit hand wavy from both sides.
Your username and comment history suggests it might not be wise to take your word as objective truth.
That’s quite a difference to most other European countries, although not all.
> That’s quite a difference to most other European countries
The difference being that you personally are not prejudiced against them for no other reason than your ignorance and arrogance.
https://trap.org.ua/en/publications/parisian-quarters-monten...
https://www.icij.org/news/2024/10/french-authorities-seize-7...
https://fakti.bg/en/imoti/927878-the-luxury-properties-of-th...
https://en.wikipedia.org/wiki/Russian_money_in_Malta
The bad part is, this is not magically gonna disappear with the next election, whoever wins. Literally whoever wins. Just like even decades after 9/11 many things have changed permanently, it will be similar with this. Proposers of some radical changes can easily be shot again, POTUS or not, or have some other variant of discrediting applied.
I wish I was joking; this is how the majority of the world operates. It's very easy for those in power to drag their countries into this, out of weakness or convenience, but extremely hard to fully get out.
A whole country’s worth of accounts just got access to a service we know is being laundered en masse and is also the same tech currently propping up many economies at the moment.
That same country is known for laundering other forms of liquidity. This is par for the course, not propaganda. And it’s going to be a huge problem by November.
It’s nowhere near what my brother got because he has a darker complexion, and looks like a stereotypical Arab from far away.
So, are you sure that this hatred is against the common folks?
Seriously? Is all that you’ve got?
Don’t want the hatred? Stop the biggest war in Europe since WW2, stop racketeering on a region level, stop acting victimized when there’s no one else to blame for your own actions and start finally working with other countries to build strong partnerships.
https://www.anthropic.com/news/anthropic-and-iceland-announc...
https://openai.com/global-affairs/openai-for-greece/
I'm just wondering when the US will finally put boots on the ground in Iran...
Regions can sign deals too…?
I have a president in my neighborhood's polka club too.
Thank god people protested against it and made them drop the plan.
EU software also collects your data
At least American software is good
If you’re gonna give up your data, at least use the better American software
Malta is among Europe's leading adopters. The country ranks first for workplace AI usage and third overall for general AI adoption. Beyond AI, Malta also has one of the highest rates of social media usage in Europe.
This initiative is less about AI literacy or OpenAI and more reflective of the Malta government's policy of making technology accessible across all society.
It's not the first time either; in the early 2000s a similar partnership with Microsoft provided heavily subsidized Microsoft Office licenses.
As a local the Daphne case has been a sore point and something which makes me very angry and sad. However it's not the only country with debatable press freedom. It's somewhat difficult to know that my country has been reduced to a single bad episode.
I'd think that the country's regressive anti-abortion laws are a bigger stain on its reputation. You can root out corruption. Moving the nation's Overton window towards a less illiberal stance tends to take a few generations.
Regarding DCG's case: trial of the prime suspect due in July I believe. DCG's tragic case was a one-off. Attacks on democracy are much more common and frequent in other Western countries.
https://rsf.org/en/index
Closed software has no place in government whatsoever. It should all be open and/or locally run, and ideally GPL/AGPL licensed.
It is another level of stupidity/moral failing to use closed software supplied from outside your nation, though.
That answers my wondering about why on earth all citizens would require a paid ChatGPT service.
If all citizens use ChatGPT as search engine instead of Google, their ability to access information and defend against fake news (as of right now) will be significantly improved and we only shift privacy concerns from Google to Open AI, which does seem like the lesser evil.
Original point stands that AI is useful and better than manual searches.
Do you think average people rigorously query multiple angles and carefully read every one of the Google results to synthesize a well-rounded viewpoint on any given topic? No, but with LLMs they can do that in one prompt.
That’s what happens in two-sided markets. Everyone’s the product.
The original adage of “if you’re not paying, you’re the product” doesn’t necessarily rule out the converse. The fact that the grandfather comment made a Freudian slip makes it funnier.
I believe the logical term "converse" means swapping the conclusion and the condition in a logical statement, i.e., converse(if A then B) = if B then A.
So here the converse would be "if you're the product, you're not paying". Which doesn't exactly make sense to me as a claim to make here. Did you just mean to reinforce your first sentence? In which case, I think you mean "the inverse", not the converse. However, I have only used the word converse in a "formal logic" scope (proofs) so I'm not sure if it has a more flexible meaning in informal language use.
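A quick truth-table check of the terminology, with P = "you're paying" and Q = "you're the product" (the adage is ¬P → Q). As it happens, the converse (Q → ¬P) and the inverse (P → ¬Q) are contrapositives of each other, hence logically equivalent, so either word names the same claim here:

```python
# Adage: "if you're not paying (not P), you're the product (Q)", i.e. ¬P -> Q.
# Verify that its converse and inverse coincide on every truth assignment.
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: a -> b is false only when a is true and b false.
    return (not a) or b

for paying, is_product in product([False, True], repeat=2):
    converse = implies(is_product, not paying)   # Q -> ¬P
    inverse = implies(paying, not is_product)    # P -> ¬Q
    assert converse == inverse                   # contrapositives agree
```

So the distinction being argued over is moot for this particular statement, though the two terms do differ in general.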
EU-created domestic software is no better
Actually it’s worse. You still surrender your data but get a worse user experience
Not sure why a bribe would’ve been involved: it’s a win-win situation, especially near elections.
> Who can register:
>
> Maltese citizens and residents
> Must have an active eID account
> No previous AI knowledge is required
From https://mdia.gov.mt/services/ai-for-all-ai-ghal-kulhadd/
Yes, ChatGPT has a large user base, but Malta's not a particularly meaningful datapoint in terms of the population size.
They import food and water. Malta is very hot during the summer. There are AC units everywhere, and they're the default heating unit as well, as there's no "European winter" there. Everyone collects rain water and stores it on the roof.
They are one tsunami away from being decimated.
There's one company renting servers and it's full of online casinos, just so the companies meet the regulatory requirement.
Malta is the worst place on earth to have a data center I can think of.
Presumably what you refer to as "money laundering" is the impression that Malta attracts foreign investments by offering regulation for poorly regulated industries and tax incentives. Which is essential to maintain competitiveness as a small state. You'd be surprised to hear that most money laundering in Malta is not tied to the igaming industry at all.
Malta is not a good place for a data center because real-estate is expensive and cooling is expensive.
I have no numbers to back this up.
[1] https://timesofmalta.com/article/the-15-billion-question-is-...
[2] https://www.emcs.com.mt/emcs-tourism-malta-economy/
Many jurisdictions literally force them to put education on the boxes.
[0] https://en.wikipedia.org/wiki/Facebook_Zero
Malta has a population of 500k. Let's assume 100k people use MaltaGPT daily, and they send an average of 10 messages per day, so roughly 1M messages per day. That averages 694 per minute, but at peak could be 3-5x that, so let's say 3000 per minute. Usage will of course vary by day of week and time of day (they could partner with a Pacific island and share inference hardware).
Those 3000 messages per minute translate to 50 messages per second. Let's say average prompt input is 5k tokens, and output is 500. So 250k tokens per second for prompt processing (let's ignore caching for simplicity) and 25k tokens per second for output decode.
If we take a 500B dense model, that converts to roughly 1 trillion FLOPs per token. So we need 250 PFLOPS for prompt processing and 25 PFLOPS for output decode, i.e. 275 PFLOPS of compute.
That may sound like a lot, but an NVIDIA DGX B200 machine (8x B200) delivers 144 PFLOPS at FP4. That assumes 100% efficiency, which isn't realistic, and we'd also be limited more by memory than by compute. So let's say we'd need 10 of them, for an entire country to have a sovereign version of ChatGPT.
The cloud cost to rent one machine is around $50/hour, so our ten-machine cluster comes to roughly $4.4m per year. However, the list price of a machine is around €400k, so the price to buy the cluster outright would be around €5m (you need the rest of the data center too), with operating costs of around €500k per year.
So per citizen: €10 upfront and €1 per year.
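The arithmetic above can be sketched as a quick script. All the inputs are the rough assumptions from the comments above (peak message rate, token counts, FLOPs per token, DGX FP4 throughput), not measured figures:

```python
# Back-of-envelope sizing for a "sovereign ChatGPT" serving Malta.

msgs_per_min_peak = 3_000            # peak load (avg ~694/min, 3-5x at peak)
msgs_per_sec      = msgs_per_min_peak / 60       # 50 msg/s

in_tokens, out_tokens = 5_000, 500   # tokens per message (prompt / output)
prefill_tps = msgs_per_sec * in_tokens   # 250k prompt tokens/s
decode_tps  = msgs_per_sec * out_tokens  # 25k output tokens/s

flops_per_token = 1e12               # ~2 FLOPs/param for a 500B dense model
need_pflops = (prefill_tps + decode_tps) * flops_per_token / 1e15

dgx_pflops = 144                     # DGX B200 (8x B200) peak at FP4
machines_at_peak = need_pflops / dgx_pflops  # at theoretical 100% efficiency

print(f"{need_pflops:.0f} PFLOPS needed; {machines_at_peak:.1f} DGX B200 "
      f"at theoretical peak (call it ~10 after efficiency/memory headroom)")
```

The gap between the ~2 machines at theoretical peak and the ~10 assumed above is the allowance for real-world utilization and memory limits.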
At least the usage data would stay sovereign even if the model was trained somewhere else.
If a government thinks that ChatGPT even in its current form is a big boon, it makes sense to do this.
This is also insurance - your population gains LLM literacy and will have an advantage over other countries; even if it's only a tiny bit, at the level of states it adds up.
No, not gonna pay yet another tax, mr. Taxman.
Just look at this list of services included in Google's AI Pro subscription[1]. Google took everything it could think any consumer might need and bundled for $20/mo. There's even $10 GCP credit (that you can use for AI API calls).
[1] https://support.google.com/googleone/answer/14534406?hl=en
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
This is at the “good to their users” phase - giving free access to an otherwise non-free product is being good to them, right?
Nobody is obligated to use it. It just moves the price to $0 for people in Malta who choose to use it. Same service.
Of course Maltese citizens can still choose not to use ChatGPT (until it becomes mandatory), but if the State supported education is bound to one particular tool, storing user data outside the EU’s jurisdiction, I think that’s something to discuss.
> How do these EU citizens submit a request for all their personal data to be deleted from OpenAI records
Probably by sending an e-mail to a designated address, like most services that operate in the EU, but you can read their TOS if you'd like to be sure.
Care to elaborate or we have become completely apathetic to any display of sleaze?
If you want my commentary on the political context, obviously I think it's not very intelligent for nations to be trusting a US corporation with all of their citizens' data. I think the most impactful use of LLMs is going to be their usage as surveillance and propaganda tools, so this is probably not a prudent decision. But legally, as pertains to GDPR, this is not different from the status quo in any way.
so: I doubt anyone has to care about that pesky GDPR if they buy the government of Malta.
My analogy: using AI is like using a navigation system; you can end up delegating everything to it and driving into a river...
More like off the cliff!
So it’s basically a giant government-sponsored free trial.
snort
On the taxpayer money.
If you force me to pay taxes, and you offer me free access to inference, I don't see why I would run my local, privacy-focused model.
What’s dangerous here is people will eventually stop thinking critically altogether, about anything, and their view of the world will be based on what they are shown on AI apps.
Next, force an eyeball scan on the peasant population.
Malta has a population of only 550k.
Everyone in Malta could already, before this deal/plan, and even without it now, use ChatGPT (or any other LLM model/service, whether free or premium.)
I'm Maltese so feel free to be as detailed as needed.
So the fact that you get it free after doing some basic due diligence is actually a big deal in the local context.
But that’s beside the point: the whole initiative is self-defeating by design. This isn’t like power; intelligence is something humans already inherently possess, and this is simply a way to amplify what exists. Intelligent people using AI generally seem to be more productive than when they don’t, while lazy or unintelligent people generally see cognitive decline, at least based on what I’ve heard online, but I could be wrong on that.
So saying “this is where you get intelligence” is both false marketing and destructive to OpenAI as a company, since by all definitions, it isn’t true.
Your body also generates electricity and natural gas. Do you also get upset when energy companies claim to provide these services as a utility?
Does AI actually provide intelligence?
I suspect in a few years it's going to be strange to talk to him and other people there. It's already hard to explain to people that "Yeah, you can have a phone call and it can sound like your dad but it might just be a chat bot."
If I never have to hear anything about AI ever again it will be too soon
We do and we don't. If you would go out there and talk to a random person about elliptic curves and matrix multiplications and whether you hit a performance ceiling in a specific 2x2 multiplication thingy with Karatsuba and wnaf, they would not know half the words, but the lying and flattering machine will be able to hold the conversation.
The thing will not get everything right: it will bullshit me about DSTU4145 using a normal basis, and lie about A being set to 1 for all standard curves, but it's definitely more intelligence than you can get from a taxi driver.
If it's not general superintelligence right there for five bucks a piece, I don't know what is
Is a calculator intelligent? I can 'talk' to it via pushing buttons.
Proof that we reached AGI 50 years ago
I mean, what's the point of this question even. The thing is either useful or fun or it's not. I personally think the whole AI is the work of devil tempting us, but some people would say that about pork sausages and Paulaner and I like my pork sausages with Paulaner.
Wikipedia has existed for decades...
Lol, they are literally just promising to make people fungible. Tale as old as time.
That is how AI boosterism works here.
Tech employees worried their stock will drop in value (laugh emoji)
For OpenAI because they get a lot of money, and for the government because they can keep tabs on how people use LLMs to make sure they're not doing anything naughty.
EU created software like Mistral is no different
come on now, surely you're not serious.
I run local models. They're fun to play with. I get a bit of a dopamine hit when it works.
They're selling addiction. This is fucking disgusting.
I can’t.