I always sigh when I see these threads on HN because many of the comments (although not all, thankfully) devolve into US / EU name-calling and wild overgeneralisations.
I would really love to see a Q&A thread like https://news.ycombinator.com/item?id=42770125 from someone who's actually read the documents, practices law in the area, and also understands the difference between US and EU law.
frantzmiccoli 14 hours ago [-]
Not a lawyer, not versed in US and EU law, but ... I read (part of) the regulation.
Not a lawyer, only an engineer starting to assess our AI models.
Your comparison to GDPR seems to be correct in a way: both are quite vague and wide. The implementation of GDPR is still unclear in certain situations, and it was even worse when it was launched. The EU AI Act has very few references to work with, and except for the very obvious areas it is still largely guesswork.
pjc50 10 hours ago [-]
I agree with this. It is horrendously vague, like GDPR. This creates a large "wariness zone" which law-abiding people avoid, while large multinationals can steamroller through until the point of direct confrontation. And even then you get things like Microsoft Safe Harbour.
On the other hand, if you're concerned about AI risk, I don't see how it could be otherwise. We don't have a clear grasp about what the real limits of capabilities are. Some people are promising "AGI" "just around the corner". Other people are spinning tales about gray goo. The risk of automated discrimination has loomed large ever since IBM sold Hollerith collation machines used in the Holocaust.
If it delays AI "innovation" by forcing only the deployment of solutions which have had at least some check by legal to at least try to avoid harming citizens, that's ... good?
tw04 12 hours ago [-]
When a law is “vague” in that it intentionally tries to be overly broad in protecting the average citizen from corporations, that’s a good thing. GDPR is very much meant to scare the facebooks of the world whose default modus operandi is: your privacy means nothing, I have a revenue number to hit and I don’t care if it ruins your life in the future.
I WANT it to be difficult for AI companies to steal other people’s hard work just like I WANT Facebook to have to spend millions of dollars on lawyers to make sure whatever data they’re collecting and sharing about me doesn’t violate my rights.
miohtama 12 hours ago [-]
The problem is that the GDPR has been largely a failure protecting citizens from corporations, but it has hurt everyone else.
- Nothing has changed in Facebook and Google data collection practices, who with other big corps account for > 90% of data collection
- Many mid tier competitors lost market share, focusing power to Google
- EU small software companies pay an estimated extra 400 EUR/year to satisfy GDPR compliance, with little tangible benefit to EU citizens.
It's called unintended consequences. We all want Zuckerberg to collect less data, but the way GDPR was implemented mostly hurt small businesses disproportionately. E.g. you now need to hire a lawyer to analyse whether you can collect an IP address and for what purposes, as discussed here.
verzali 12 hours ago [-]
I will be honest, I am always very skeptical of these claims that the big tech companies are fine but small business is hurting. Many of them seem to originate with the big tech companies themselves and I highly doubt they really have the interests of small business in mind. Plus, I'm old enough to remember when everyone claimed EU tech law was about to ban memes, which didn't happen...
miohtama 11 hours ago [-]
Here
> The main burden falls on SMEs, which experienced an average decline in profits of 8.5 percent. In the IT sector, profits of small firms fell by 12.5 percent on average. Large firms, too, are affected, with profits declining by 7.9 percent on average. Curiously, large firms in the IT sector saw the smallest decline in profits, of “only” 4.6 percent. Specifically, the authors find “no significant impacts on large tech companies, like Facebook, Apple and Google, on either profits or sales,” putting to bed the myth that U.S. technology firms are the enemy of regulation because it hits their bottom lines.
That is typically the problem with regulations. It’s usually easier for large companies to comply. It’s called regulatory caption.
_joel 11 hours ago [-]
> It’s called regulatory caption.
Regulatory Capture, no?
bigfudge 10 hours ago [-]
that's right, although that isn't quite the same concept. Regulatory capture implies the large companies have helped draft the regulations to their own advantage (and SME's disadvantage in this case).
raron 3 hours ago [-]
The biggest GDPR fine ever was against Facebook, and it was less than 0.3% of their revenue. That is just a "let us ignore GDPR" tax. I don't know about small businesses, but big tech from the US is fine.
> I'm old enough to remember when everyone claimed EU tech law was about to ban memes, which didn't happen...
AFAIK those parts of that law were changed somewhat
giancarlostoro 10 hours ago [-]
We saw a bunch of small side project type of sites from the EU close down all over HN after GDPR became a thing. The risk for someone small is too high. The minimum fines are in the millions.
jimmaswell 10 hours ago [-]
Something has gone horribly wrong with your governance when you can 1. get fined a million euro under GDPR and 2. be arrested for hate crimes, for 1. hosting a default Apache server with logs and 2. putting a joke video of your dog doing a "Hitler salute" on it.
giancarlostoro 9 hours ago [-]
Hey, Count Dankula is funny, maybe it's not for everyone, but he really should not have been arrested for what his dog did. His YouTube has really fascinating content on it.
cccbbbaaa 9 hours ago [-]
No, the minimum fines are in the hundreds, and that’s on the unlikely event where you actually get a fine. Fines over a million are definitely not the norm. See GDPR article 83 and https://www.enforcementtracker.com/
guappa 12 hours ago [-]
> EU small software companies pay estimated extra 400 EUR/year
[citation needed]
tw04 12 hours ago [-]
> The problem is that the GDPR has been largely a failure protecting citizens from corporations, but it has hurt everyone else.
This is just laughably incorrect. Literally every Fortune 500 that I work with who has operations in Europe has an entire team that owns GDPR compliance. It is one of the most successful projects to curtail businesses treating private data like poker chips since HIPAA.
raron 3 hours ago [-]
It would be really hard to believe that Google and Facebook comply with the (spirit of the) GDPR and delete all personal data when it is no longer necessary. That would simply go against their business model.
Anyways, GDPR doesn't protect your data, it just specifies how companies can use it. So my name, address, phone number, etc. will still be stored by every webshop for 10 years or so, just waiting to be breached (because of some tax laws).
mavhc 11 hours ago [-]
Is their job to reduce private data to the minimum needed, or the maximum allowed?
raron 4 hours ago [-]
Probably to find loopholes and questionable interpretations.
red_phone 10 hours ago [-]
Would you consider GDPR a failure if businesses collected the maximum allowed under the law?
47282847 7 hours ago [-]
A requirement to minimize data collection is part of it.
fxtentacle 10 hours ago [-]
"Nothing has changed in Facebook and Google data collection practices"
Facebook and Google got sued, paid fines, and changed their behavior. I can do an easy export of all of my FB and G data, thanks to the GDPR.
"EU small software companies pay estimated extra 400 EUR/year to satisfy GDPR compliance"
WTF? no! I work with several small companies and it's super easy to just NOT store anyone's birthday (why would you need that for e-commerce?) and to anonymize IPs (Google provides a plugin for GA). And, basically, that's it. Right now, I can't even find an example of how the GDPR has created any costs. It's more like people changed their behavior and procedures once GDPR was announced and that's "good enough" to comply.
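For concreteness, the IP anonymization being described can be a few lines of code. A minimal Python sketch, assuming the common policy of zeroing the last IPv4 octet (roughly what Google Analytics' anonymizeIp option did) and keeping only a /48 prefix for IPv6; the exact masking policy is an illustrative assumption, not legal advice:

    import ipaddress

    def anonymize_ip(ip: str) -> str:
        """Coarsen an address before logging: zero the last IPv4 octet,
        or keep only the first 48 bits of an IPv6 address."""
        addr = ipaddress.ip_address(ip)
        prefix = 24 if addr.version == 4 else 48
        net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        return str(net.network_address)

    print(anonymize_ip("203.0.113.57"))   # -> 203.0.113.0
    print(anonymize_ip("2001:db8::1"))    # -> 2001:db8::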
Ylpertnodi 13 hours ago [-]
>...GDPR seems to be correct in a way, both are quite vague and wide.
How is the gdpr vague?
mimsee 13 hours ago [-]
Are IP addresses considered PII or not? I remember there being multiple conflicting conclusions on that
sdefresne 13 hours ago [-]
It looks like IP addresses are considered PII by GDPR:
So in essence, it disallows logging IP address for any purpose, be it security, debugging, rate-limiting etc. because you can't give consent in advance for this, and no other sentence in Art. 6.1 applies.
Moreover, to reason about this, one also needs to take into account Art 6.2 which means there might be an additional 27 laws you need to find and understand.
Note, however, that recital 30 which you quoted is explicitly NOT referenced by Art. 6, at least according to this unofficial site: https://gdpr-info.eu/art-6-gdpr/
This particular case might be solved through hashing, but then there are only 4.2bn IPv4 addresses, so it's easy to try out all the hashes. Or maybe it's only OK with IPv6?
I find this vague or at least hard to reconcile with technical everyday reality, and doing it well can take enormous amounts of time and money that are not spent on advancing anything of value.
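On the hashing point: because the IPv4 space has only about 4.3 billion addresses, an unsalted hash of an IP can be reversed by exhaustive search, which is why hashing alone is usually not treated as anonymization. A rough Python sketch of that reversal (scanning a single /16 here so it finishes instantly; the full space is just more of the same):

    import hashlib
    import ipaddress

    def hash_ip(ip: str) -> str:
        return hashlib.sha256(ip.encode()).hexdigest()

    stored = hash_ip("203.0.113.57")  # what a "pseudonymized" log line might contain

    # Brute force the candidate space; ~4.3bn SHA-256 hashes is only hours of CPU time.
    for candidate in ipaddress.ip_network("203.0.0.0/16"):
        if hash_ip(str(candidate)) == stored:
            print("recovered:", candidate)
            break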
martin_a 12 hours ago [-]
That's not true. IP addresses might be processed with regard to Article 6.1 c) or 6.1 f), but only for these very narrowly defined use cases and in accordance with Article 5. So, purge your logs after 14/30 days, don't use the IP address for anything else, and you will be fine.
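A minimal sketch of the "purge your logs after 14/30 days" advice, assuming one rotated log file per day under an illustrative path; the retention period and directory are assumptions you would set per your own Article 5 justification:

    import time
    from pathlib import Path

    RETENTION_DAYS = 30                      # illustrative retention window
    cutoff = time.time() - RETENTION_DAYS * 86400

    for logfile in Path("/var/log/myapp").glob("access-*.log"):
        if logfile.stat().st_mtime < cutoff:
            logfile.unlink()                 # the IPs in old logs go with the file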
cccbbbaaa 11 hours ago [-]
> So in essence, it disallows logging IP address for any purpose, be it security, debugging, rate-limiting etc. because you can't give consent in advance for this, and no other sentence in Art. 6.1 applies.
In addition to the other answers, I want to point out that recital 49 says that it is possible under legitimate interest (6(1)f).
troupo 12 hours ago [-]
> So in essence, it disallows logging IP address for any purpose, be it security, debugging, rate-limiting etc. because you can't give consent in advance for this, and no other sentence in Art. 6.1 applies.
No, it doesn't. Subsections b, c, and f roughly cover this. On top of that, no one is going to come at you with fines for doing regular business things as long as you don't store this data indefinitely, sell it to third parties, or use it for tracking. As laid out in Article 1.1.
On top of that, for many businesses existing laws override GDPR. E.g. banks have to keep personal records around for many years.
frantzmiccoli 13 hours ago [-]
GDPR is clear-ish indeed.
That being said: it is extremely strict, a lot of lawyers like to make it stricter (because for them stricter means safer), and a lot of lawyers have to back off under business constraints (which sometimes push below the legal requirements). My experience is that no two companies have the same understanding of GDPR.
riedel 3 hours ago [-]
My take is that nobody actually practices law in this area at this time. Tons of stuff will again need to go to court before you can be sure whether these regulations actually apply to you. And many of the cases that are relevant for smaller enterprises will never go to court, as with GDPR, leaving uncertainty for years. Having said this, the good thing about the AI Act is that it force-injects some principles for evaluation into existing standards.
Disclaimer: I am advising a company that sells AI Act-related compliance tooling
DoingIsLearning 2 days ago [-]
I am not an expert, but there seems to be an overlap in the article between 'AI' and, well ... just software, or signal processing:
- AI that collects “real time” biometric data in public places for the purposes of law enforcement.
- AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
- AI that uses biometrics to infer a person’s characteristics
All of the above can be achieved with just software, statistics, old ML techniques, i.e. 'non hype' AI kind of software.
I am not familiar with the details of the EU AI Act, but it seems like the article is simplifying important details.
I assume the ban is on the purpose/usage rather than whatever technology is used under the hood, right?
spacemanspiff01 1 days ago [-]
From the laws text:
For the purposes of this Regulation, the following definitions apply:
(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; Related: Recital 12
https://artificialintelligenceact.eu/article/3/
https://artificialintelligenceact.eu/recital/12/
So it seems like yes, software would qualify if it is non-deterministic enough. My impression is that software that simply says "if your income is below this threshold, we deny you a credit card" would be fine, but somewhere along the line, when your decision tree grows large enough, that probably changes.
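To make the contrast concrete, here is a toy Python sketch of the two cases: a rule defined solely by a person versus a decision boundary inferred from data. Where exactly the legal line falls is the guesswork discussed above; the data and thresholds are invented for illustration:

    from sklearn.tree import DecisionTreeClassifier

    # Case 1: a rule defined solely by a natural person; Recital 12 suggests
    # this kind of system is not meant to be covered.
    def approve_card(income: float) -> bool:
        return income >= 30_000   # threshold chosen and documented by a human

    # Case 2: the system "infers, from the input it receives, how to generate
    # outputs"; the boundary comes from historical data, not from a person.
    X = [[25_000, 580], [42_000, 700], [31_000, 640], [60_000, 720]]  # income, credit score
    y = [0, 1, 0, 1]                                                  # past decisions
    model = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(model.predict([[35_000, 650]]))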
btown 1 days ago [-]
Notably, Recital 12 says the definition "should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations."
So your hip new AI startup that's actually just hand-written regexes under the hood is likely safe for now!
(Not a lawyer, this is neither legal advice nor startup advice.)
anothernewdude 20 hours ago [-]
> Notably, Recital 12 says the definition "should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations."
That's every AI system. It follows the rules defined solely by the programmers (who I suppose might sometimes stretch the definition of natural persons) who made pytorch or whatever framework.
Muromec 14 hours ago [-]
If the thinking machine rejects my mortgage application, it should be possible to point out which exact rule triggered the rejection. With rules explicitly set by an operator, it's possible. It's also possible to say that the rules in place comply with the law and stay compliant during operation, for example that the system doesn't unintentionally guess that I have another citizenship based on my surname or postal code.
tzs 12 hours ago [-]
If the mortgage application evaluation system is deterministic so that the same input always produces the same output then it is easy to answer "Why was my application rejected?".
Just rerun the application with higher income until you get a pass. Then tell the person their application was rejected because income was not at least whatever that passing income amount was.
Maybe also vary some other inputs to see if it is possible to get a pass without raising income as much, and add to the explanation that they could lower the income needed by say getting a higher credit score or lowering your outstanding debt or not changing jobs as often or whatever.
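As a sketch of that procedure, assuming the evaluator is a deterministic black box you can re-run at will (the stand-in evaluator, step size, and field names below are invented for illustration):

    def minimum_passing_income(evaluate, application: dict,
                               step: int = 1_000, cap: int = 1_000_000):
        """Re-run a deterministic evaluator at increasing incomes until it
        approves, and report the smallest passing income found."""
        income = application["income"]
        while income <= cap:
            if evaluate(dict(application, income=income)):
                return income
            income += step
        return None  # no income alone flips the decision; other factors dominate

    # Stand-in evaluator, purely hypothetical:
    evaluate = lambda app: app["income"] >= 48_000 and app["credit_score"] >= 620
    print(minimum_passing_income(evaluate, {"income": 35_000, "credit_score": 680}))  # 48000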
mjburgess 12 hours ago [-]
That tells you how sensitive the model is along its decision boundary to perturbations in the input -- but it isn't a relevant kind of reason why the application was rejected, since this decision boundary wasn't crafted by any person. We're here looking for programs which express prior normative reasoning (eg., "you should get a loan if...") -- whereas this decision boundary expresses no prior normative reason.
It is simply that, eg., "on some historical dataset, this boundary most reliably predicted default" -- but this confers no normative reason to accept or reject any individual application (cf. the ecological fallacy). And so, in a very literal way, there is no normative reason the operator of this model has in accepting/rejecting any individual application.
Muromec 4 hours ago [-]
>Just rerun the application with higher income until you get a pass. Then tell the person their application was rejected because income was not at least whatever that passing income amount was.
Why do you need an AI if what you are doing is "if X < N" ?
tzs 2 hours ago [-]
It would not be just an "if X < N". Those decisions are going to depend on a lot of variables besides income such as credit history, assets, employment history, debts, and more.
For someone with a great credit history, lots of assets, a long term job in a stable position, and low debt they might be approved with a lower income than someone with a poor credit history whose income comes from a job in a volatile field.
There might be some absolute requirements, such as that the person has a certain minimum income independent of all those other factors, that they have a certain minimum credit score, and so on. If the application is rejected because it doesn't meet one of those then sure, you can just do a simple check and report that.
But most applications will be above the absolute minimums in all parameters and the rejection is because some more complicated function of all the criteria didn't meet the requirements.
But you can't just tell the person "We put all your numbers into this black box and it said 'no'." You have to give them specific reasons their application was rejected.
Muromec 2 hours ago [-]
Doesn't all this contradict what I initially replied to?
xvokcarts 8 hours ago [-]
> If the mortgage application evaluation system is deterministic so that the same input always produces the same output then it is easy to answer "Why was my application rejected?".
But banks, at least in my country (central EU), don't have to explain their reasons for rejecting a mortgage application. So why would their automated systems have to?
Muromec 4 hours ago [-]
They don't have to explain to the applicant. They do have to explain to the regulator how exactly they stay compliant with the law.
There is a so-called three lines system: the operational line does the actual thing (approves or rejects the mortgage), the second line gives the operational line the manual to do so the right way, and internal audit keeps an eye on whether whatever the first line is doing is actually what the policy says they should be doing.
It's entirely plausible that the operational line is an actual LLM trained on a policy that the compliance department drafted, and the audit department occasionally checks the outputs of the model against the policy.
But at this point it's much easier to use LLM to write deterministic function in your favorite lisp based on the policy and run that to make decisions.
tzs 6 hours ago [-]
In the US they do have to explain. It's a requirement of the Equal Credit Opportunity Act of 1974 [1]. Here is an article with more detail on what is required in the explanation [2].
In a strict mathematical reading, maybe - depends on how you define "rules", "defined" and "solely" :P. Fortunately, legal language is more straightforward than that.
The obvious straightforward read is along the lines of: imagine you make some software, which then does something bad, and you end up in court defending yourself with an argument along the lines of, "I didn't explicitly make it do it, this behavior was a possible outcome (i.e. not a bug) but wasn't something we intended or could've reasonably predicted" -- if that argument has a chance of holding water, then the system in question does not fall under the exception you quoted.
The overall point seems to be to make sure systems that can cause harm always have humans that can be held accountable. Software where it's possible to trace the bad outcome back to specific decisions made by specific people who should've known better is OK. Software that's adaptive to the point it can do harm "on its own" and leaves no one but "the system" to blame is not allowed in those applications.
freeone3000 20 hours ago [-]
It means a decision tree where every decision node is entered by humans is not an AI system, but an unsupervised random forest is. It’s not difficult to see the correct interpretation.
hotstickyballs 22 hours ago [-]
It doesn't seem to be clear to me whether auto-formatted code (or even generated code from copilot for example) would be classified as AI.
philipov 20 hours ago [-]
It seems to me the key phrase in that definition is "that may exhibit adaptiveness after deployment" - If your code doesn't change its own operation without needing to be redeployed, it's not AI under this definition. If adaptation requires deployment, such as pushing a new version, that's not AI.
maxrmk 20 hours ago [-]
I'm not sure what they intended this to apply to. LLM based systems don't change their own operation (at least, not more so than anything with a database).
We'll probably have to wait until they fine someone a zillion dollars to figure out what they actually meant.
yorwba 17 hours ago [-]
For LLMs we have "for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
close04 16 hours ago [-]
For either option you can trace the intention of the definitions to "was it a human coding the decision or not". Did a human decide the branches of the literal or figurative "if"?
The distinction is accountability. Determining whether a human decided the outcome, or it was decided by an obscure black box where data is algebraically twisted and turned in a way no human can fully predict today.
Legally that accountability makes all the difference. It's why companies scurry to use AI for all the crap they want to wash their hands of. "Unacceptable risk AI" will probably simply mean "AI where no human accepted the risk", and with it the legal repercussions for the AI's output.
bigfudge 9 hours ago [-]
This would be an excellent outcome (and probably the one intended).
bigfudge 9 hours ago [-]
> We'll probably have to wait until they fine someone a zillion dollars to figure out what they actually meant.
In reality, we will wait until someone violates the obvious spirit of this so egregiously, ignores multiple warnings to that end, and winds up in court (a la the GDPR suits).
This seems pretty clear.
philipov 19 hours ago [-]
It's as if the person who wrote it based their entire understanding of AI solely on depictions from science fiction.
ahoka 16 hours ago [-]
I'm pretty sure that was not the case.
nkmnz 14 hours ago [-]
I've personally reviewed all 7b parameters of my model and they won't adapt after deployment :)
Muromec 14 hours ago [-]
That means you can answer the question whether they comply with the relevant law in the necessary jurisdiction and can prove that to the regulator. That should be easy, right? If it's not, maybe it's better to use two regexps instead.
nkmnz 4 hours ago [-]
The model says yes.
Muromec 4 hours ago [-]
Off to the model jail it goes.
zelphirkalt 13 hours ago [-]
I understand that phrase to have the opposite meaning: Something _can_ adapt its behavior after deployment and still be considered AI under the definition. Of course this aspect is well known as online learning in machine learning terminology.
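For what that looks like in practice, a minimal online-learning sketch: the model keeps updating after deployment, so its behaviour changes without anyone pushing a new version (random data here, purely illustrative):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier(loss="log_loss")

    # Initial training before deployment.
    X0, y0 = rng.random((100, 3)), rng.integers(0, 2, 100)
    model.partial_fit(X0, y0, classes=[0, 1])

    # After deployment: each new labelled batch shifts the decision boundary,
    # i.e. the system "exhibits adaptiveness after deployment".
    X_new, y_new = rng.random((10, 3)), rng.integers(0, 2, 10)
    model.partial_fit(X_new, y_new)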
oneeyedpigeon 16 hours ago [-]
It's unclear whether "may" means "can (and does)" or whether it renders that entire clause optional.
raziel2p 21 hours ago [-]
unless your AI generated code gets deployed into production without any human supervision/approval, probably not.
hnbad 10 hours ago [-]
No offense but this is a good demonstration of a common mistake tech people (especially those used to common law systems like the US) engage in when looking at laws (especially in civil law systems like much of the rest of the world): you're thinking of technicalities, not intent.
If you use Copilot to generate code by essentially just letting it autocomplete the entire code base with little supervision, yeah, sure, that might maybe fall under this law somehow.
If you use Copilot like you would use autocomplete, i.e. by letting it fill in some sections but making step-by-step decisions about whether the code reflects your intent or not, it's not functionally different from having written that code by hand as far as this law is concerned.
But looking at these two options, nobody actually does the first one and then just leaves it at that. Letting an LLM generate code and then shipping it without having a human first reason about and verify it is not by itself a useful or complete process. It's far more likely this is just a part of a process that uses acceptance tests to verify the code and then feeds the results back into the system to generate new code and so on. But if you include this context, it's pretty obvious that this indeed would describe an "AI system" and the fact there's generated code involved is just a red herring.
So no, your gotcha doesn't work. You didn't find a loophole (or anti-loophole?) that brings down the entire legal system.
abdullahkhalids 1 days ago [-]
Seems very reasonable. Not all software has the same risk profile, and autonomous+adaptive software certainly has a more dangerous profile than simpler software, so it should be regulated differently.
johndhi 1 days ago [-]
What? Why? Shouldn't those same use cases all be banned regardless of what tech is used to build them?
abdullahkhalids 23 hours ago [-]
Using a machine, instrument or technology for some intended outcome will nevertheless have a distribution of outcomes. Some good, some bad. A kitchen knife will usually cut food, but will occasionally cut your finger. If maliciously used, the bad outcomes become a lot more common.
Two different machines can be designed for the same use case, but the possible bad outcomes in either "correct" use or malicious use of the two machines can be very different. So it is reasonable to ban the one which has unacceptable bad outcomes.
For example, while both a bicycle and a dirt bike are mobility vehicles, a park may allow one and ban the other.
ImageXav 16 hours ago [-]
Not necessarily. Interpretability of a system used to make decisions is more important in some contexts than others. For example, a black box AI used to make judiciary decisions would completely remove transparency from a system that requires careful oversight. It seems to me that the intent of the legislation is to avoid such cases from popping up, so that people can contest decisions made that would have a material impact on them, and that organisations can provide traceable reasoning.
kenjackson 11 hours ago [-]
Is a black box AI system less transparent than 12 jurors? It would seem anytime the system is human judgement, an AI system would be as transparent (or nearly so).
It would seem accountability would only be higher in systems where humans were not part of the decision-making process.
cabalamat 17 hours ago [-]
> and that may exhibit adaptiveness after deployment
So if an AI can't change its weights after deployment, it's not really an AI? That doesn't make sense.
As for the other criteria, they're so vague I think a thermostat might apply.
stubish 13 hours ago [-]
Keyword 'may'.
A learning thermostat would apply, say one that uses historical records to predict changes in temperature and preemptively adjusts. And it would be low risk and unregulated in most cases. But attach it to a self-heating crib or a premature-baby incubator and it would jump to high risk, and you might have to prove it is safe.
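A toy version of that learning thermostat, assuming hourly historical temperature records and a simple linear model; everything here is invented for illustration, and the same code is low-risk on a living-room wall and high-risk strapped to an incubator:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Historical records: hour of day -> observed room temperature.
    hours = np.array([[0], [6], [12], [18], [23]])
    temps = np.array([17.0, 16.5, 21.0, 19.5, 17.5])
    model = LinearRegression().fit(hours, temps)

    def preemptive_setpoint(next_hour: int, target: float = 21.0) -> float:
        predicted = float(model.predict([[next_hour]])[0])
        # Start heating early in proportion to the predicted shortfall.
        return target + 0.5 * (target - predicted)

    print(preemptive_setpoint(6))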
butlike 9 hours ago [-]
So if the thermostat jumps to 105 during the night, that's not considered 'high-risk?'
stubish 2 hours ago [-]
Maybe you are right and it is still risky for sleeping adults. In any case, even high risk the standard that needs to be followed might be as simple as 'must have a physical cutoff at 30C'.
logifail 16 hours ago [-]
> they're so vague I think a thermostat might apply
Quite.
One wonders if the people who came up with this have any actual understanding of the technology they're attempting to regulate.
zelphirkalt 13 hours ago [-]
It _may_ exhibit adaptiveness after deployment, which would not change it being AI. I think that is the right reading of the definition.
sofixa 16 hours ago [-]
> As for the other criteria, they're so vague I think a thermostat might apply.
As long as the thermostat doesn't control people's lives, that's fine.
zelphirkalt 14 hours ago [-]
Hm, not too bad a definition. Seems like it was written by people who know what machine learning is.
surfingdino 15 hours ago [-]
Good. You cannot have a functioning society where decisions are made in a non-deterministic way. Especially when those decisions deviate from agreed protocols (laws, bylaws, contracts, etc.).
gamedever 1 days ago [-]
So no more predicting the weather with sensors?
lubsch 1 days ago [-]
Which of the risk-level criteria listed in the article do you think would cover weather predictions?
_bin_ 23 hours ago [-]
something like a long-term weather forecast could lead to declining to issue an insurance policy for someone's home or automobile in an area. it could significantly impact the price. make it publicly available, and an insurance company could use that prediction. clearly an unacceptable risk
jeltz 1 days ago [-]
Do you use biometric data to predict the weather?
throwaway743 1 days ago [-]
I trained a CV model to tell me the temp when I stand in front of the mirror based on if my nipples are hard or not, and how hard or soft they are.
moi2388 19 hours ago [-]
Ah, it detects if it’s a bit nippy? ;)
dingnuts 1 days ago [-]
if you set up your thermostat to respond to the output from your model you could get it to turn up the temperature by pinching your nipples
spunker540 21 hours ago [-]
Too bad the EU will never experience such innovation
scottcha 23 hours ago [-]
I have a project which uses weather info to predict avalanche risk. Reading the articles its hard for me to understand whether this would apply or not but my feeling is it might (If I ever need to run this in the EU I would talk to a lawyer).
https://openavalancheproject.org
Filligree 12 hours ago [-]
As it should; people would use it to make life-or-death decisions, so that application is high-risk.
We already have ways to predict avalanche risk that are well understood and explainable. There should be a high threshold on replacing that.
uniqueuid 1 days ago [-]
Unfortunately yes, the article is a simplification, in part because the AI act delegates some regulation to existing other acts. So to know the full picture of AI regulation one needs to look at the combination of multiple texts.
The precise language on high risk is here [1], but some enumerations are placed in the annex, which (!!!) can be amended by the commission, if I am not completely mistaken. So this is very much a dynamic regulation.
Is the regulation itself AI, due to being adaptive after deployment?
Just joking, but I think it is a funny parallel. Also because it probably consists solely of human-made rules.
dathinab 21 hours ago [-]
> just software, statistics, old ML techniques
yes, and with the same problems if applied to the same use cases in the same way
in turn they get regulated, too
it would be strange to limit a law to some specific technical implementation; this isn't some let's-fight-the-hype regulation but a serious long-term effort to regulate automated decision-making and classification processes which pose an increased or high risk to society
impossiblefork 1 days ago [-]
I wouldn't be surprised if it does cover all software. After all, chess solvers are AI.
oneeyedpigeon 16 hours ago [-]
Chess solvers are more AI than 90% of the things currently being touted as AI!
Muromec 14 hours ago [-]
that's what DORA the explora of your unit tests Act is
teekert 1 days ago [-]
Have been having a lot of laughs about all the things we call AI nowadays. Now it’s becoming less funny.
To me it’s just generative AI, LLMs, media generation. But I see the CNN folks suddenly getting “AI” attention. Anything deep learning really. It’s pretty weird. Even our old batch processing, SLURM based clusters with GPU nodes are now “AI Factories”.
sethd 1 days ago [-]
Even the A* search algorithm is technically AI.
ykonstant 1 days ago [-]
Oh man, I really want to watch CNN folks try to pronounce Dijkstra!
jcgrillo 1 days ago [-]
We could have it both ways with a Convolutional News Network
Well, it used to be. But whenever we understand something, we move the goal posts of what AI is.
At least that's what we used to do.
aithrowawaycomm 1 days ago [-]
It's not "moving the goalposts." It's realizing that the principles behind perceptrons / Lisp expert systems / AlphaGo / LLMs / etc might be very useful and interesting from a software perspective, but they have nothing to do with "intelligence," and they aren't a viable path for making machines which can actually think in the same way a chimpanzee can think. At best they do a shallow imitation of certain types of formal human thinking. So the search continues.
eru 24 hours ago [-]
No, it's still moving the goalposts. It just that we move the goalposts for pretty good reasons. (I agree!)
Btw, you bring up the perspective of realising that our tools weren't adequate. But it's broader: completely ignoring the tools, we also realise that, e.g., being able to play chess really, really well didn't actually capture what we wanted to mean by 'intelligence'. Similar for other outcomes.
bmicraft 24 hours ago [-]
Moving the goal posts and noticing that you mistook the street lights for goal posts is not really the same.
xdennis 13 hours ago [-]
> To me it’s just generative AI, LLMs, media generation.
That's not what AI is.
Artificial Intelligence has decades of use in academia. Even a script which plays Tic Tac Toe is AI. LLMs have advanced the field profoundly and gained widespread use. But that doesn't mean that a Tic Tac Toe bot is no longer AI.
When a term passes to the mainstream people manufacture their own idea of what it means. This has happened to the term "hacker". But that doesn't mean decades of AI papers are wrong because the public uses a different definition.
It's similar to the professional vs the public understanding of the term "prop" in movie making. People were criticizing Alec Baldwin for using a real gun on the set of Rust instead of a "prop" gun. But as movie professionals explained, a real gun is a prop gun. Prop in theater/movies just means property. It's anything that's used in the production. Prop guns can be plastic replicas, real guns which have been disabled, or actually firing guns. Just because the public thinks "prop" means "fake", doesn't mean movie makers have to change their terms.
pmontra 19 hours ago [-]
As somebody told me recently, now AI means any program that does something that people think is AI, even if programs doing that thing have been with us for ten years or more with the same degree of accuracy.
I've worked with the bureaucrats in Brussels on tech/privacy topics.
Their deep meaning is "we don't want machines to make decisions". A key point for them has always been "explainability".
GDPR has a provision about "profiling" and "automated decision making" for key aspects of life. E.g. if you ask for a mortgage (a pretty important, life-changing decision) and the bank rejects it, you a) can ask them "why" and they MUST explain, in writing, and b) if the decision was made by a system that was fed your data (demographic & financial), you can request that a human repeat the 'calculations'.
Good luck having ChatGPT explain.
They are trying to avoid having the dystopian nightmare of the (apologies - I don't mean to disrespect the dead, I mean to disrespect the industry) Insurance & Healthcare in the US, where a system gets to decide 'your claim is denied' against humans' (doctors in this case)(sometimes imperfect) consultations because one parameter writes "make X amount of profit above all else" (perhaps not coded with this precise parameter but somehow else).
Now, considering the (personal) data collected and sent to companies in the US (or other countries) that don't fall under the Adequacy Decisions [0], and combining that with the aforementioned (decision-making) risks, using LLMs in production is 'very risky'.
Using Copilot for writing code is very much different, because there the control over "converting the code to binaries, and moving said binaries to the prod env." stays with humans (they used to call them Librarians back in the day...), so human intervention is required to do code review, code tests, etc. (just in case SkyNet wrote code to export the data 'back home' to OpenAI, xAI, or any other AI company it came from).
I haven't read the regulation lately/in its final text (I contributed and commented some when it was still being drafted), and/but I remember the discussions on the matter.
EDIT: ultimately we want humans to have the final word, not machines.
nobodywillobsrv 18 hours ago [-]
The EU and other organizations will be using these to ban data collection and anything to do with protection of the EU.
They will interpret "predict" as merely "report" or "act on".
This is terrible.
theptip 1 days ago [-]
Seems like a mostly reasonable list of things to not let AI do without better safety evals.
> AI that tries to infer people’s emotions at work or school
I wonder how broadly this will be construed. For example, if an agent uses CoT and needs emotional state as part of that, can it be used in a work or school setting at all?
layer8 1 days ago [-]
This quote is inaccurate. The actual wording is: "the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;" and it links to https://artificialintelligenceact.eu/recital/44/ for rationale.
So, this targets the use case of a third party using AI to detect the emotional state of a person.
unification_fan 1 days ago [-]
We need to profile your every thought and emotion. Don't worry though, it's for medical or safety reasons only. Same for your internet history... You know, terrorism and all. Can't have that.
solatic 15 hours ago [-]
Unable to unagree with doubleplusgood reasons
nielsole 1 days ago [-]
I can definitely see use cases where cameras that detect distressed people can help prevent harm to themselves and others.
nonchalantsui 24 hours ago [-]
Stop resisting! We detect you are depressed and will take action!
dmix 1 days ago [-]
Is this just based on a hypothetical scenario they sat in a room coming up with, or has such a thing been tried and harmed people?
jorisboris 16 hours ago [-]
I once tried building a MacBook app which lowers my music volume when I'm not smiling, to try to stimulate more happy emotions (forcing yourself to smile induces happy feelings)
Then I started thinking how this could be used in restaurants to see if waiters smile to the people they are serving
Or in customer service (you can actually hear it when people smile on the phone)
Then I realised that this kind of tech would definitely lead to abuse
(btw that's not the reason I didn't build it, it was just not that easy to build)
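For what it's worth, the core of that idea is only a handful of lines with OpenCV's stock Haar cascades; the sketch below is illustrative only (the volume call is macOS-specific via osascript, and the cascade thresholds are guesses), not a working product:

    import subprocess
    import cv2

    faces = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    smiles = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

    def set_volume(percent: int) -> None:
        subprocess.run(["osascript", "-e", f"set volume output volume {percent}"])

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        smiling = False
        for (x, y, w, h) in faces.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h, x:x + w]
            if len(smiles.detectMultiScale(roi, 1.7, 20)) > 0:
                smiling = True
        set_volume(70 if smiling else 30)   # duck the music when no smile is seen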
zelphirkalt 14 hours ago [-]
We cannot extend your work contract, based on your performance over the last week. You definitely did not smile enough. Our systems indicate that you were also not happy enough for a sufficient portion of the time you worked.
This is an interesting idea (the first one..) and I suspect it'd be quite easy to build now. Someone posted a web app recently that reminds you to blink (iff you don't), and it was incredibly accurate for me, running fully in the browser.
The EU generally (so far) has passed reasonable legislation about these things. I'd be surprised if it was taken more broadly than the point where a reasonable person would feel comfortable with it.
__MatrixMan__ 11 hours ago [-]
A difficulty I have with customer service folk is that usually I'm just trying to report a bug. I'm not upset. Please stop trying to give me coupons. I'm not trying to cancel my account I just want to help your engineers fix this bug (and later on, I want to see that it has actually gone away).
If I must interact with an AI for this, I'd prefer that it infer my emotions correctly.
Zenst 1 days ago [-]
I would imagine that such a tool to infer emotional states would be most useful for autistic people, who are, as I can attest, somewhat handicapped on that front. Maybe that will get challenged as disability discrimination by some autistic group. Which would be interesting. As with most things, there are rules, and exceptions to those rules - no shoe fits everyone, though forcing people to wear the wrong shoe size can do more harm than good.
danielheath 1 days ago [-]
> I would imagine that such a tool to infer emotional states would be most useful for autistic people who are as I can attest, somewhat handicapped upon that front.
It might well be a useful tool to point at yourself.
It's an entirely inappropriate one to point at someone else. If you can't imagine having someone estimate your emotional state (usually incorrectly), and use that as a basis to disregard your opinion, you've lived a very different life to mine. Don't let them hide behind "the AI agreed with my assessment".
cwillu 1 days ago [-]
On the other hand, as someone whose emotional state is routinely incorrectly assessed by people, I can't imagine a worse hell than having that misassessment codified into an AI that I am required to interact with.
Mordisquitos 13 hours ago [-]
> I would imagine that such a tool to infer emotional states would be most useful for autistic people who are as I can attest, somewhat handicapped upon that front.
The regulation explicitly provides an exception for medical reasons:
Article 5:
1. The following AI practices shall be prohibited:
[...]
(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
pjc50 10 hours ago [-]
I can definitely find you autistic people who would hate having such a device pointed at them, because they don't mask the ""correct"" emotional state well enough.
sofixa 16 hours ago [-]
> Seems like a mostly reasonable list of things to not let AI do without better safety evals.
Yes. This is how you know that all the people screaming about the EU overregulating and how the EU will miss all that AI innovation haven't even bothered to Google or ask their preferred LLM about the legislation. It's mostly just common sense to avoid EU citizens having their rights or lives decided by blackbox algorithms nobody can explain, be it in a Post Office (UK) scandal style, or US healthcare style.
hcfman 1 days ago [-]
Laws that are open to interpretation with drastic consequences if it's interpreted against your favour pose unacceptable risk to business investors and stifle innovation.
kakadu 15 hours ago [-]
Me and my euro mates are not interested in this kind of "innovation".
The "business investors" and "innovators" can take this kind of business elsewhere.
This kind of talk where regulators are assaulted by free marketeers and freedom fighters is unacceptable here.
Let us not misinterpret business people as "innovators"; if what they do is not a net positive for society, they do not belong here.
cmenge 12 hours ago [-]
I'm not sure where "here" is and who you think you speak for, but as a European, I am strictly against regulation, in particular vague regulation made by non-elected EU bureaucrats. And no, freedom of speech and a discussion about the pros and cons is also not "unacceptable". It is part of the democratic process.
phtrivier 11 hours ago [-]
The AI act was voted by very-much-elected members of the EU parliament, in 2023, with a large consensus: 523 votes for, 46 against, 49 abstentions. [1]
You seem to be very strict about the kind of political discourse that you would allow.
And I'm not even going to elaborate on how problematic your "net positive for society" is, or who would possibly be in charge of determining that.
BrenBarn 21 hours ago [-]
A heck of a lot of what passes for "innovation" these days is stuff I absolutely want to stifle.
jeffdotdev 1 days ago [-]
There is no law that isn't open to interpretation. There is a reason for the judicial branch of government.
caseyy 18 hours ago [-]
Well, the laws in civil law countries that practice legal literalism are not open to interpretation. Eastern Europe, much of which is a part of the EU, is quite literalist.
The understanding is that interpreting laws leads to bias, partiality, and injustice; while following the letter of the law equally in each situation is the most just approach.
saagarjha 15 hours ago [-]
Interpreting the law literally is very easy to do in a biased way: you just pick and choose when to do it.
oneeyedpigeon 16 hours ago [-]
I don't believe that's even possible — I'd love to see an example. How do you define anything 100% literally, 100% unambiguously? You'd have to include the entire language in your definition for a start, and keep that constantly updated.
caseyy 11 hours ago [-]
Lithuanian laws are a good example. They are extremely verbose compared to most common law countries.
I lived in Lithuania for a while and at the time, there was a big national debate about how “family” should be defined in laws — what people it can and can’t include.
So yes — a lot of emphasis is put on verbose definitions in literalist legal texts. And very very verbose explanations of many edge cases, too,
I know first hand it will be very hard to read Lithuanian legal texts for someone who is not a native speaker of the language, and even for natives it’s a challenge. So you could instead google “literalist legal systems”, and I believe you’ll find at least some examples/more context in English somewhere.
dns_snek 1 days ago [-]
People said that about GDPR. Laws that don't leave any room for interpretation are bound to have loopholes that pose unacceptable risk to the population.
daedrdev 1 days ago [-]
I think it's quite clear GDPR has indeed led to lower investment and delayed or cancelled products in Europe
dns_snek 19 hours ago [-]
Can you share any data?
It's also quite clear that places without strong privacy protections like the US are developing into dystopian hellscapes.
Spooky23 1 days ago [-]
Speed isn't always ideal. My favorite example, though it's getting dated, is hotel WiFi.
Early adopters signed contracts with companies that provided shitty WiFi at high prices for a long time. A $500 hotel would have $30/night connections that were slow, while the Courtyard Marriott had it for free.
jeffgreco 1 days ago [-]
Yet better privacy protections than we in the States enjoy.
DocTomoe 1 days ago [-]
The EU in a nutshell:
You can't have nice things, but on the bright side Google/Apple/Facebook won't know what you had for dinner.
Now give us your whole financial transaction and travel history, so we can share it with the US, a hostile country, citizen!
theyinwhy 15 hours ago [-]
Genuinely curious: What are those nice things Europe cannot have?
profeatur 11 hours ago [-]
Well, most things, considering the absolutely pathetic average salary, the rising cost of living, and the ever-increasing tax burden.
naabb 9 hours ago [-]
Most things? Like vacation days? Like healthcare? Like the safety of not being murdered by a maniac with a gun? I'll take my lower salary thank you very much.
Nevermind the fact that you obviously come from a privileged position if you think that money is all that's important. You're blinded.
theyinwhy 10 hours ago [-]
The US average is absolutely distorted. You should definitely compare the median. In terms of cost of living I very much doubt US is less expensive, especially when including health care in the equation. Additionally, there is a huge difference depending which european countries you are comparing to.
DocTomoe 15 hours ago [-]
For starters, I am still waiting for Apple Intelligence to arrive on my phone. Reason given is "EU legislative concerns".
Then there's the nontrivial number of especially local US news sources which now give me a cheerful "451 Unavailable For Legal Reasons" error code.
Then there's the outright stupid stuff - like lightbulbs that do not cost 15 euros a piece (to save 'energy'), or drinking straws that do not dissolve in my coke within the first minute (to avoid 'disposable plastics'). There are hundreds of examples like that.
The EU is a regulation juggernaut, and is making the world an actively worse place for everyone globally. See "Cookie Banners".
fredoliveira 14 hours ago [-]
> For starters, I am still waiting for Apple Intelligence to arrive on my phone. Reason given is "EU legislative concerns".
So the EU should not control where your data is processed? You can't claim in one comment to be bummed about data exchanges between the EU and the US (which you do), and then not understand why there are regulations in place that are slowing down the roll-out of things like Apple Intelligence, for your benefit.
DocTomoe 9 hours ago [-]
There is a qualitative difference between
1. I am giving my data freely and because of my own decision to an organization I trust and
2. The state is taking my data by force of law to share it with an inherently untrustworthy organization.
chgs 15 hours ago [-]
Those are corporations causing the problems, not the eu.
I understood he was referring to incandescent light bulbs, which have been largely regulated out of the market. So you now need to get an "Edison light bulb" which circumvents the regulation but costs significantly more.
naabb 9 hours ago [-]
Good, that's the point. Price out the products that are bad for the environment. They are still there if you want to contribute to the degradation of the environment
jillyboel 8 hours ago [-]
I'm confused. The US also banned incandescent lamps?
Yes, it only affects airlines that have connections to the US. But if I book Lufthansa from Frankfurt to Tokyo, the PNR will still be sent to the US, for Lufthansa has connections to the US.
Yes, there are 'safeguards' in there, to shackle the DHS to be responsible with the data - but who seriously thinks the data, once in US hands, is used responsibly and only for the matters outlined in the treaty? The US has been less of a reliable partner for decades now.
Oh, right. They won't do that for financial transactions, right? Right?
> Yes, it only affects airlines that have connections to the US. But if I book Lufthansa from Frankfurt to Tokyo, the PNR will still be sent to the US, for Lufthansa has connections to the US.
Any proof of that claim? The agreement specifically mentions flights between the EU and the US, so any departure from that (like the scenario you describe) is unlawful, according to my own understanding.
DocTomoe 8 hours ago [-]
Where do you read this only affects flights between the EU and the US?
Article 2.1 clearly states it is applicable to all EU airlines *operating* flights to or from the US. That does not mean they ONLY have to provide PNR FOR those flights
Article 3 speaks about "Data in their (the airlines) reservation systems". There's no limitation to only US-related flights.
The specific mention of flights to and from the US you are likely referring to is in the preamble, referencing a law the US set up earlier.
csunbird 10 hours ago [-]
The financial transactions are also shared by both sides; the EU can also request data from the US, as clearly stated in the document.
Both documents clearly define the use cases applicable to the data sharing, and the second document you linked also explicitly states that the US has to put in the same effort to provide the same capabilities to the EU.
oneeyedpigeon 16 hours ago [-]
The USA in a nutshell:
We elected a President who tried to lead an armed insurrection but we'll never press criminal charges because we elected him President again.
Sorry, but anything the EU has ever done pales in comparison with that.
linksnapzz 9 hours ago [-]
Of course. No EU president could lead an insurrection! The working group in Brussels charged with formalizing the insurrection guidelines to produce the right forms for the insurrectionists to apply for their mob permit...is currently stuck hammering out definitions between the Italian and Czech delegations.
They hope the paperwork will be complete by 2053, which will allow an EU president to, hopefully, attempt some kind of coup (if everything is filled out correctly) sometime before 2060.
vixen99 10 hours ago [-]
You're right, no competition. Let's face it, your President is a loser who can't even manage a simple insurrection; no proper planning I guess. Fortunately not much bloodshed though the defending side did manage to shoot dead one unarmed female ‘warrior’.
DocTomoe 15 hours ago [-]
As a European, respectfully, I am not too interested in comparing the stability of my system of government with that of another country. I try to compare my circumstances to a better ideal, not a worse one.
shortsunblack 24 hours ago [-]
By which data is that clear? If anything, GDPR has lead to greater investment in areas that actually matter. Zero knowledge proofs, pseudonymization techniques, user friendly open-source SaaS products such as NextCloud.
sporkydistance 1 days ago [-]
GDPR is one of the best pieces of legislation to come out of the EU this century.
It is the utter bane of "move fast and break things", and I'm so glad to have it.
I will never understand the submissive disposition of Americans to billionaires who sell them out. They are all about being rugged Cow Boys while smashing systems that foster their own well-being. It's like their pathology to be independent makes them shoot at their own feet. Utterly baffling.
Ekaros 19 hours ago [-]
It is because they want to be those oppressors. And they think this type of legislation might prevent them from being such. They want to be rich and exploit other people as much as possible with little as possible consequences.
throwawaymobule 13 hours ago [-]
You can still move fast and break things, just treat PII like explosive radioactive waste while you're doing it.
hcfman 1 days ago [-]
I would like to see a new law that puts any member of government found obstructing justice in jail.
Except that the person responsible for the travesty of justice of framing 9 innocent people in this Dutch series is currently the president of the court of Maastricht.
Remember: the courts have the say as to who wins and loses under these new vague laws. The ones running the courts have to not be corrupt. But the case above shows that this is in fact not the case.
_bin_ 23 hours ago [-]
surely EU courts will not unfairly penalize US-developed models...
bmicraft 23 hours ago [-]
Yes, sometimes stuff like this happens. Still, I'd like to think the EU is a prime example of how "reasonable" legislation has benefits over extremely specific legislation. Reasonable wins almost every time in how it fares under changing circumstances and in being pretty much loophole-proof by design.
cactusplant7374 22 hours ago [-]
Is there somewhere I can read more about this?
_heimdall 2 days ago [-]
What I don't see here is how the EU is actually defining what is and is not considered AI.
> AI that manipulates a person’s decisions subliminally or deceptively.
That can be a hugely broad category that covers any algorithmic feed or advertising platform.
Or is this limited specifically to LLMs, as OpenAI has so successfully convinced us that LLMs really are AI and previous ML tools weren't?
dijksterhuis 2 days ago [-]
the actual text in the ~act~ guidance states:
> Exploitation of vulnerabilities of persons, manipulation and use of subliminal techniques
techcrunch simplified it.
from my reading, it counts if you are intentionally setting out to build a system to manipulate or deceive people.
edit — here's the actual text from the act, which makes it clearer that it's about whether the deception is purposefully intended for malicious reasons
> the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm
Bjartr 2 days ago [-]
Seems like even a rudimentary ML model powering ad placements would run afoul of this.
dijksterhuis 2 days ago [-]
> In addition, common and legitimate commercial practices, for example in the field of advertising, that comply with the applicable law should not, in themselves, be regarded as constituting harmful manipulative AI-enabled practices.
Broadly speaking, I feel that whenever you have to build specific carve-outs into a law for perfectly innocuous behaviour that would otherwise be illegal under the law as written, it's not a very well thought out law.
Either the behavior in question is actually bad in which case there shouldn't be exceptions, or there's actually nothing inherently wrong with it in which case you have misidentified the actual problem and are probably needlessly criminalizing a huge swathe of normal behavior beyond just the one exception you happened to think of.
bmicraft 23 hours ago [-]
Funny, I took away pretty much the opposite: that advertising is only "acceptable" because it has been here for so long, but is otherwise equally ban-worthy for all the same (reasonable) reasons.
campl3r 19 hours ago [-]
We, as society, allow legacy implementations to exist all the time. This is one of these times.
HPsquared 1 days ago [-]
It's the style of law that says "Anything not explicitly permitted, is banned."
kavalg 13 hours ago [-]
The burden will be on proving "significant harm"
anticensor 1 days ago [-]
That is by design.
blackeyeblitzar 1 days ago [-]
Even ads without ML would run afoul of this
dist-epoch 2 days ago [-]
so "sex sells" kind of ads are now illegal?
vitehozonage 1 days ago [-]
Exactly what i thought too.
Right now, and for at least the past 10 years, it has been completely normalised and typical for targeted advertising to use machine learning to intentionally and subliminally manipulate people. I was taught less than 10 years ago at a top university that machine learning was classified as AI.
It raises many questions. Is it covered by this legislation? Other comments make it sound like they created an exception, so it is not. But then I have to ask, why make such an exception? What is the spirit and intention of the law? How does it make sense to create such an exception? Isn't the truth that the current behaviour of the advertising industry is unacceptable but it's too inconvenient to try to deal with that problem?
Placing the line between acceptable tech and "AI" is going to be completely arbitrary and industry will intentionally make their tech tread on that line.
troupo 2 days ago [-]
> What I don't see here is how the EU is actually defining what is and is not considered AI.
Because instead of reading the source, you're reading a sensationalist article.
> That can be a hugely broad category that covers any algorithmic feed or advertising platform.
Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.
----
We're going to get a repeat of GDPR aren't we? Where 8 years in people arguing about it have never read anything beyond twitter hot takes and sensationalist articles?
_heimdall 1 days ago [-]
Sure, I get that reading the act is more important than the article.
And in reading the act, I didn't see any clear definitions. They have broad references to what reads much like any ML algorithm, with carve-outs for areas where manipulating or influencing is expected (like advertising).
Where in the act does it actually define the bar for a technology to be considered AI? A link or a quote would be really helpful here, I didn't see such a description but it is easy to miss in legal texts.
robertlagrant 1 days ago [-]
The briefing on the Act talks about the risk of overly broad definitions. Why don't you just engage in good faith? What's the point of all this performative "oh this is making me so tired"?
pessimizer 1 days ago [-]
> Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.
You could point out a specific section or page number instead of wasting everyone's time. The vast majority of people who have an interest in this subject do not have a strong enough interest to do what you claim to have done.
You could have shared, right here, the knowledge that came from that reading. Instead, at least a hundred interested people who would have come across that clear definition from the act in your comment will now continue ignorantly making decisions you disagree with. Victory?
scarface_74 2 days ago [-]
Maybe if the GDPR were a simple law instead of 11 chapters and 99 sections, and if all anyone got as a benefit from it weren't cookie banners, it would be different.
HeatrayEnjoyer 2 days ago [-]
GDPR doesn't benefit anyone? Is that a joke?
1 days ago [-]
troupo 1 days ago [-]
> Maybe if the GDPR was a simple law
It is a simple law. You can read it in an afternoon. If you still don't understand it 8 years later, it's not the fault of the law.
> instead of 11 chapters and 99 sections
News flash: humans and their affairs are complicated
> all anyone got as a benefit from it is cookie banners
Please show me where GDPR requires cookie banners.
Bonus points: who is responsible for the cookie banners.
Double bonus points: why HN hails Apple for implementing "ask apps not to track", boos Facebook and others for invasive tracking, ... and boos GDPR which literally tells companies not to track users
wiz21c 1 days ago [-]
> Please show me where GDPR requires cookie banners.
That's the bit everyone forgets. GDPR didn't ask for cookie banners at all. It asked for consent in cases where consent is needed.
And most of the time consent is not needed since I just can say "no cookies" to many websites and everything is just fine.
shortsunblack 24 hours ago [-]
Consent is never "needed". Consent is one of many legal bases that allows for data processing to take place. If other legal bases than consent do not apply, the industry can use "consent" as a get out of jail card. Consent as a legal basis was heavily lobbied by Big Tech.
If even consent does not apply, then the data shall not be processed. That's the end of it.
scarface_74 1 days ago [-]
Intentions don’t matter, effects do
HPsquared 1 days ago [-]
So why does every website persist in annoying their users? Are they all (or 99%) simply stupid? I have a hard time believing that.
shortsunblack 24 hours ago [-]
It's called dark patterns and malicious compliance.
The annoying banners in particular were designed by IAB Tech Lab, which is an industry front for adtech/martech companies.
throwawaymobule 13 hours ago [-]
Oncehub removed tracking cookies from some of their meeting-invite pages in the EU and stopped showing a banner, because they thought it looked off-putting.
They got a few support tickets from people who assumed they were still tracking and had just removed the banner.
gjm11 1 days ago [-]
It's (at least in some cases) malice, not stupidity.
By putting cookie banners everywhere and pretending that they are a requirement of the GDPR, the owners of the websites (or of the tracking systems attached to those websites) (1) provide an opportunity for people to say "yes" to tracking they would almost certainly actually prefer not to happen, and (2) inflict an annoyance on people and blame it on the GDPR.
The result: huge numbers of people think that the GDPR is a stupid law whose main effect is to produce unnecessary cookie banners, and argue against any other legislation that looks like it, and resent the organization responsible for it.
Which reduces the likely future amount of legislation that might get in the way of extracting the maximum in profit by spying on people and selling their personal information to advertisers.
Which is ... not a stupid thing to do, if you are in the business of spying on people and selling their personal information to advertisers.
scarface_74 21 hours ago [-]
You really think American companies are playing that level of 3D chess? I see cookie banners on corporate sites that have no ads.
Dylan16807 20 hours ago [-]
Monkey see monkey do.
watwut 20 hours ago [-]
It is about tracking, not about ads. Ads without tracking require no banners. Non-tracking cookies require no banners.
Corporate sites track you and so need a banner. It is intentionally obnoxious so that you click "accept all".
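To illustrate the distinction with a minimal sketch (a hypothetical Flask app; route and cookie names are made up): a strictly necessary session cookie is exempt from the consent requirement, while a long-lived advertising identifier is exactly what triggers the banner.

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.post("/login")
    def login():
        resp = make_response("logged in")
        # Strictly necessary session cookie: exempt from the consent requirement.
        resp.set_cookie("session_id", "abc123", httponly=True, secure=True, samesite="Lax")
        return resp

    @app.get("/")
    def home():
        resp = make_response("hello")
        # A long-lived advertising identifier like the one below is what would
        # require prior consent (and hence a banner) before being set:
        # resp.set_cookie("ad_id", "...", max_age=60 * 60 * 24 * 365)
        return resp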
scarface_74 1 days ago [-]
It doesn’t matter what it requires, the point is as usual, the EU doesn’t take into account the unintended consequences of laws it passes when it comes to technology.
That partially explains the state of the tech industry in the EU.
But guess which had a more deleterious effect on Facebook ad revenue and tracking - Apples ATT or the GDPR?
MattPalmer1086 1 days ago [-]
The EU just prioritises protection for its citizens over tech industry profits. They are also not opposed to ad revenue and tracking; only that people must consent to being tracked, no sneaky spying. I'm quite happy for tech to have those restrictions.
scarface_74 1 days ago [-]
The EU right now is telling Meta that it is illegal to give users the option of either ads based on behavior on the platform or charging a monthly subscription fee.
impossiblefork 1 days ago [-]
and it is illegal, and has been illegal for a long time.
Consent for tracking must be freely given. You can't give someone something in return for it.
HPsquared 1 days ago [-]
Free as in freedom, or free as in beer?
shortsunblack 24 hours ago [-]
Free as in free from coercion. And GDPR has clear language of 'no detriment'.
Dylan16807 20 hours ago [-]
Good.
(And they are allowed to run as many non-tracking ads as they want.)
Kbelicius 1 days ago [-]
> The EU right now is telling Meta that it is illegal to give users the option of either ads based on behavior on the platform or charging a monthly subscription fee.
And? With GDPR the EU decided that private data cannot be used as a form of payment. It can only be voluntarily given. Similarly to using one's body: you can fuck whoever you want and you can give away your organs if you so choose, but no business is allowed to be paid in sex or organs.
scarface_74 1 days ago [-]
That’s just the problem. Meta was going to give users a choice between paying with “private data” or paying money. The EU won’t let people make that choice. Are you saying people in the EU are too dumb to decide for themselves?
But how is the data that you give to Facebook “private” to you? Facebook isn’t sharing your data with others. Ad buyers tell Facebook “Put this ad in front of people between 25 and 30 who look at pages that are similar to $x on Facebook”.
shortsunblack 24 hours ago [-]
You cannot barter with fundamental human rights, which right to data protection is (as per Charter of Fundamental Rights of the European Union), the same way you cannot barter yourself into slavery, even if you insist you are willing and consenting. By what precedent? By the precedent of the state being sovereign in enacting law.
danielscrubs 16 hours ago [-]
WeChat would exit on Android if you didn’t give your contact list to them, but this behaviour wasn’t allowed on iOS by our Apple overlords, and I’m quite happy about that.
Kbelicius 1 days ago [-]
> That’s just the problem. Meta was going to give users a choice between paying with “private data” or paying money.
Well, per GDPR they aren't allowed to do that. Are they giving that option to users outside of the EU? Why not?
> The EU won’t let people make that choice are you saying people in the EU are too dumb to decide for themselves?
No I do not think that. What made you think that I think that?
What about sex and organs? In your opinion should businesses be allowed to charge you with those?
> But how is your data that you give to Facebook “private” to you?
I didn't give it to them. What is so hard to understand about that?
Are you saying that your browsing data isn't private to you? Care to share it?
scarface_74 1 days ago [-]
> Well, per GDPR they aren't allowed to do that. Are they giving that option to users outside of EU? Why Not?
Because no other place thinks that their citizens are too dumb to make informed choices.
> What about sex and organs? In your opinion should businesses be allowed to charge you with those?
If consenting adults decide they want to have sex as a financial arrangement why not? Do you think these 25 year old “girlfriends” of 70 year old millionaires are there for the love?
> I didn't give it to them. What is so hard to understand about that?
When you are on Facebook’s platform and you tell them your name, interests, relationship status, check-ins, and so on, on their site, are you not voluntarily giving them your data?
> Are you saying that your browsing data isn't private to you? Care to share it?
If I am using a service and giving that service information about me, yes I expect that service to have information about me.
Just like right now, HN knows my email address and my comment history and where I access this site from.
dbetteridge 1 days ago [-]
There's a fundamental difference I think in the European mindset on private data and the American.
From the European mindset: private data is not "given" to a company, the company is temporarily allowed to use the data while that person engages in a relationship with the company, the data remains owned by the person (think copyright and licensing of artistic works).
American companies: think that they are granted ownership of data, just because they collect it. Therefore they cannot understand or don't want to comply with things like GDPR where they must ask to collect data and even then must only use it according to the whims of the person to whom it belongs.
watwut 19 hours ago [-]
It is more that they are not dumb enough to buy into these kinds of manipulative arguments.
troupo 17 hours ago [-]
> Because no other place thinks that their citizens are too dumb to make informed choices.
In case of Facebook (or tracking generally) you had no chance to make an informed choice. You are just tracked, and your data is sold to hundreds of "partners" with no possibility to say "no"
> Just like right now, HN knows my email address and my comment history and where I access this site from.
And that is fine. You'd know that if you spent about one afternoon reading through GDPR, a regulation that has been around for 8 years.
scarface_74 13 hours ago [-]
Facebook doesn’t sell your data. Why would they? Having your data is their competitive advantage. They sell access to you based on the data they have.
troupo 12 hours ago [-]
> Facebook doesn’t sell your data.
A distinction without meaning. Here's your original statement: "no other place thinks that their citizens are too dumb to make informed choices."
Questions:
At which point do you make an informed choice about the data that Facebook collects on you?
At which point do you make an informed choice about Facebook tracking you across the internet, even on websites that do not belong to Facebook, and through third parties that Facebook doesn't own?
At which point do you make an informed choice to let Facebook use any and all data it has on you to train Facebook's AI?
Bonus questions:
At which point did Facebook actually start giving users at least some information on the data they collect and letting them make an informed choice?
scarface_74 10 hours ago [-]
> At which point do you make informed choice about the data that Facebook collects on you?
You make an “informed choice” when you create a Facebook account, give Facebook your name, date of birth, your relationship status and who you are in a relationship with, your sexual orientation, when you check in to where you have been, when you click on and buy from advertisers, when you join a Facebook group, when you tell it who your friends are…
Should I go on? At each point you made an affirmative choice about giving Facebook your information.
> At which point do you make informed choice about Facebook tracking you across the internet, even on websites that do not belong to Facebook, and through third parties that Facebook doesn't own?
You did see my link where Facebook stopped doing that in 2018? You notice you can’t find any newer references.
watwut 19 hours ago [-]
I love hearing pontifications about unintended consequences from Americans, and especially from Americans in tech.
No, being free to abuse others is not a positive feature. Not for tech, not for politics, not for business.
scarface_74 10 hours ago [-]
So was it intended for the EU not to have a tech industry of note?
troupo 17 hours ago [-]
> the EU doesn’t take into account the unintended consequences of laws it passes when it comes to technology.
So, the companies that implement these cookie banners are entirely without blame, right?
So what is your solution?
Reminder: GDPR is the General Data Protection Regulation. It doesn't deal with cookies at all. It deals with the tracking, collection and keeping of user data. It doesn't matter whether that happens on the internet, in your phone app, or in an offline business.
Reminder: if your solution is "this should've been built into the browser", then: 1) GDPR doesn't deal with specific tech (because tech changes), 2) when governments mandate specific solutions they are called overreaching, overbearing tyrants, and 3) why hasn't the world's largest advertising company, which incidentally owns the world's most popular browser, implemented a technical solution for tracking and cookie banners in the browser even though it's been 8 years already?
> But guess which had a more deleterious effect on Facebook ad revenue and tracking - Apples ATT or the GDPR?
In the long run most likely GDPR (and that's why Facebook is fighting EU in courts, and only fights Apple in newspaper ads), because Apple's "ask apps to not track" doesn't work. This was literally top article on HN just yesterday: "Everyone knows your location: tracking myself down through in-app ads" https://timsh.org/tracking-myself-down-through-in-app-ads/
So what is your solution to that?
scarface_74 13 hours ago [-]
Meta announced in their earnings report that ATT caused a drop in revenue after it went into effect.
They made no such announcement after the GDPR.
What’s my solution? There isn’t one, because of the way the entire internet works: the server is always going to have your IP address. For instance, neither Overcast nor Apple’s podcast app actively tracks you or ships a third-party ad SDK [1]. But since they, and every other real podcast player, GET both the RSS feed and the audio directly from the hosting provider, the hosting provider can do dynamic ad insertion based on your location by correlating it to your IP address.
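A rough sketch of the mechanism described above, with a made-up regional ad table standing in for whatever GeoIP database and ad server a real hosting provider actually uses:

    from ipaddress import ip_address, ip_network

    # Hypothetical inventory and IP ranges, purely for illustration.
    REGIONAL_ADS = {"EU": b"<eu-ad-audio>", "OTHER": b"<default-ad-audio>"}
    EU_RANGES = [ip_network("2.16.0.0/13")]  # made-up range, not a real EU IP list

    def region_for(ip: str) -> str:
        addr = ip_address(ip)
        return "EU" if any(addr in net for net in EU_RANGES) else "OTHER"

    def serve_episode(request_ip: str, intro: bytes, content: bytes) -> bytes:
        # Splice a region-appropriate ad between intro and content at download
        # time, keyed entirely off the requesting IP address.
        ad = REGIONAL_ADS[region_for(request_ip)]
        return intro + ad + content

No client-side tracking is involved; the targeting happens entirely on the server from data it necessarily has.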
What I personally do is avoid ad-supported apps, because I find them janky. On my computer at least, I use the ChatGPT plug-in for Chrome and it’s now my default search engine. I pay for ChatGPT and the paid version has had built-in search for years.
troupo 12 hours ago [-]
> They made no such announcement after the GDPR.
And yet they make no move against Apple, and they are fighting EU in courts. Hence long term.
> There isn’t one, you know because of the way the entire internet works, the server is going to always have your IP address.
Having my IP address is totally fine under GDPR.
What is not fine under GDPR is using this IP address (or other data) for, say, indefinite tracking.
For example, some of these completely innocent companies that were forced to show cookie banners or something, and that only want to show ads, store precise geolocation data for 10+ years.
I guess something something informed consent and server will always have IP address or something.
> What I personally do avoid is not use ad supported apps because I find them janky.
So you managed to give me a non-answer based on your complete ignorance of what GDPR is about.
scarface_74 10 hours ago [-]
> And yet they make no move against Apple, and they are fighting EU in courts. Hence long term.
What “move” could they do against Apple?
> So you managed to give me a non-answer based on your complete ignorance of what GDPR is about.
You asked me how I avoid it. I do it by being an intelligent adult who can make my own choices.
mgdev 10 hours ago [-]
The EU once again races toward premature regulation.
Europe's tech sector will continue to wither as America and others surge ahead.
You can't regulate your way to technological leadership.
qoez 9 hours ago [-]
If you read the list of unacceptable things, they're actually super reasonable, and it's kinda crazy to think American companies will be allowed to (and probably will) participate in them.
mgdev 5 hours ago [-]
Absent examples of concrete impact (damage), it's arbitrary fear mongering.
You can write about anything to make it sound bad, even when it's good, and vice versa.
Need to focus on outcomes.
justonceokay 10 hours ago [-]
I think in the medium-term future we will have our own real-life Butlerian Jihad against thinking machines. Maybe they won’t be outright banned, but there needs to be some conservative force that pushes back against progress for the sake of “because we can”. Is this legislation premature? Maybe. But I don’t think it will be the last or the most comprehensive within our lifetimes.
kvgr 9 hours ago [-]
And China will have none of that and will remove us from the face of the earth.
Timshel 9 hours ago [-]
Yeah, let's wait until more "AI that tries to infer people’s emotions at work or school" or "AI that manipulates a person’s decisions subliminally or deceptively" is deployed and starts spamming crap before making any regulation ...
mgdev 5 hours ago [-]
You're describing every ML ranking system.
Havoc 1 days ago [-]
For once that doesn’t seem overly broad. Pretty much agree with all of the list
johndhi 1 days ago [-]
The "high risk" list is where the breadth comes in
hkwerf 16 hours ago [-]
The "high risk" list, though, is essentially traditional safety functions (article 6) and functions that affect fundamental rights and access to basic services (annex III)? It's not that broad at all either.
bArray 8 hours ago [-]
> Under the bloc’s approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk — AI for healthcare recommendations is one example — will face heavy regulatory oversight; and (4) unacceptable risk applications — the focus of this month’s compliance requirements — will be prohibited entirely.
I think this is a massive oversight, for a few reasons:
1. Things will continue to be done, just elsewhere. The EU could find itself scrambling to catch-up (again) because of their own regulation.
2. Increased oversight is only part of the picture; the real challenge is that, even with the oversight, you still have to prove that the AI is acceptably safe, or that the risk is acceptable.
3. Some things are inherently not safe, e.g. war. I know many (almost all) military tech companies using AI, and the EU is about to become an impossible investment zone for these guys.
I think this will make investment into the EU tough, given that tonnes of investment is now focused on AI. AI is, and will likely remain, the fuel for economic growth for quite some time, and the EU is adding a time/money tax to that fuel.
mhitza 2 days ago [-]
> AI that attempts to predict people committing crimes based on their appearance.
Should have been
> AI that attempts to predict people committing crimes
hcfman 1 days ago [-]
Except government systems for the same. In the Netherlands we had the benefits affair: a system that attempted to predict people committing benefits fraud. It destroyed the lives of more than 25,000 people before anyone stepped in.
Do you think they are going to fine their own initiatives out of existence? I don't think so.
However, they also have a completely extrajudicial approach to fighting organised crime, guaranteed to be using AI approaches on the banned list. But you won't get any freedom of information request granted when investigating anything like that.
For example, any kind of investigation would often involve knowing which person filled a particular role. They won't grant such requests, claiming it involves a person, so it's personal. They won't tell you.
Let's have a few more new laws that protect citizens please, not more handles for government SLAPPs.
HeatrayEnjoyer 2 days ago [-]
Why?
mhitza 1 days ago [-]
I'd prefer if Minority Report remains a work of fiction, or at least not possible in the EU.
hcfman 1 days ago [-]
These laws will not be applied to the government
chmod775 18 hours ago [-]
Why?
throwawaymobule 12 hours ago [-]
Is there an explicit carveout? The GDPR has been used against governments.
troupo 2 days ago [-]
Instead of relying on Techcrunch and speculating, you could read sections (33), (42), and (59) of the EU AI Act yourself.
mhitza 1 days ago [-]
Article 59 seems relevant, other two on a quick skim don't seem to relate to the subject.
> 2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.
Seems like it allows pretty easily for nation states to add laws that let them skirt around it.
shortsunblack 24 hours ago [-]
EU law largely does not regulate national security matters of the states. Though that justification is limited per international law (such as ECHR, which is the basis of many anti-government surveillance rulings by CJEU). All European Union members are part of ECHR because that is a pre-requisite for EU membership.
But ECHR is not part of EU law, especially it is not binding on the European Commission (in the context of it being a federal or seemingly federal political executive). This creates a catch-22 where member states might be violating ECHR but are mandated by EU law, though this is a very fringe consequence arising out of legal fiction and failed plans to federalize EU. Most recently, this legal fiction has become relevant in Chat Control discourse.
Great Britain and Poland have explicit opt-outs from some European law.
hcfman 1 days ago [-]
Yep, they want to be able to continue to violate human rights and do the dirty.
> Article 59 seems relevant, other two on a quick skim don't seem to relate to the subject.
Your original take: "Should have been: AI that attempts to predict people committing crimes"
Article 42. literally:
--- start quote ---
In line with the presumption of innocence, natural persons in the Union should always be judged on their actual behaviour. Natural persons should never be judged on AI-predicted behaviour based solely on their profiling, personality traits or characteristics, such as nationality, place of birth, place of residence, number of children, level of debt or type of car, without a reasonable suspicion of that person being involved in a criminal activity based on objective verifiable facts and without human assessment thereof.
Therefore, risk assessments carried out with regard to natural persons in order to assess the likelihood of their offending or to predict the occurrence of an actual or potential criminal offence based solely on profiling them or on assessing their personality traits and characteristics should be prohibited.
In any case, that prohibition does not refer to or touch upon risk analytics that are not based on the profiling of individuals or on the personality traits and characteristics of individuals, such as AI systems using risk analytics to assess the likelihood of financial fraud by undertakings on the basis of suspicious transactions or risk analytic tools to predict the likelihood of the localisation of narcotics or illicit goods by customs authorities, for example on the basis of known trafficking routes.
--- end quote ---
> Seems like it allows pretty easily for national states to add in laws that allow them to skirt around
Key missed point: "subject to the same cumulative conditions as referred to in paragraph 1."
Where paragraph 1 is "In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: ... list of conditions ..."
-----
In before "but governments can do whatever they want". Yes, they can, and they will. Does it mean we need to stop any and all legislation and regulation because "government will do what government will do"?
I think the EU has done better following its own rules than most other countries (not that it's perfect in any way).
Does this only apply to usage, or does it include training the model as well? Training a model is extremely expensive, and it’s hard to imagine a company investing a huge amount of money to develop two different models just to comply with regulations (though maybe it’s worth it, I’m just guessing here).
I think it’s more likely that companies would adhere to EU regulations and use the same model everywhere or implement some kind of filter.
drakonka 13 hours ago [-]
Not a lawyer.
When I attended a conference about this I remember the distinction between "Provider" and "Deployer" being discussed. Providers are manufacturers developing a tool, deployers are professional users making a service available using the tool. A deployer may deploy a provided AI tool/model in a way that falls within the definition of unacceptable risk, and it is (also) the deployer's responsibility to ensure compliance.
The example given was of a university using AI for grading. The university is a deployer, and it is their responsibility to conduct a rights impact assessment before deploying the tool to its internal users.
This was compared to normal EU-style product safety regulation, which is directed at the manufacturer (what would be the provider here): if you make a stuffed toy, don’t put in such and such chemicals, etc. Here, the _application_ of the tool is under scrutiny as well vs just the tool itself. Note that this is based on very hasty notes[0] from the talk - I'm not sure to what extent the provider vs deployer responsibility divide is actually codified in the act.
AI used for social scoring (e.g., building risk profiles based on a person’s behavior) - Oh, so insurance, and credit score is banned now? And background checks.
AI that manipulates a person’s decisions subliminally or deceptively. - Oh, so no more ads?
AI that exploits vulnerabilities like age, disability, or socioeconomic status. - Oh, are we banning facebook now?
AI that attempts to predict people committing crimes based on their appearance. - pretty sure that exists somewhere too.
AI that uses biometrics to infer a person’s characteristics, like their sexual orientation. - oh my, tiktok does not even need biometrics, just a couple of swipes. Google too, actually, just from where you visit.
AI that collects “real time” biometric data in public places for the purposes of law enforcement. - but cameras everywhere are ok.
AI that tries to infer people’s emotions at work or school. - like every social network, right? or a company with toxic marketing, but without ai (hello, apple with green bubbles)
AI that creates — or expands — facial recognition databases by scraping images online or from security cameras. - oh, this also probably exists. So companies could track clients.
fredoliveira 18 hours ago [-]
I fail to see where you stand based on your line by line commentary. Are we not supposed to be against these obvious negatives? Regulation against undesired outcomes needs to start somewhere. Do you believe we should not regulate, simply because we already do some of the things that seem to fall under these individual buckets?
14 hours ago [-]
octacat 18 hours ago [-]
Laws are nice when they work and are clear and applicable.
This would probably be about as useful as GDPR. Of course it sounds nice on paper, but in reality it will get drowned in a lot of legalese, like the tracking-consent forms nowadays. Do you know which companies you gave consent to, and when? Me neither.
The issue with such laws is that they are extremely wide and hard to regulate/enforce/check. Making the regulation scores a few political points, while probably not being so useful in real life.
We have already been doing a lot that falls under these buckets for years; big tech uses AI for algorithms left and right. "Oopsie, we removed your YouTube channel / application because our AI system said so. You can talk to another AI system next." We already have these, but I don't hear any reasonable feedback from the EU about it.
Basically, big companies with strong legal departments would find the way around the rules. Small startups would be forced to move.
gqgs 5 hours ago [-]
>AI that manipulates a person’s decisions subliminally or deceptively.
This is a strange one. Arguably this is the objective of marketing in general.
Therefore, I'm not sure why the line is drawn only when AI is involved.
hnburnsy 1 days ago [-]
If AI is outlawed then only outlaws will have AI.
bmicraft 23 hours ago [-]
That take doesn't at all engage with the main point of the article: Unacceptable risk.
_bin_ 23 hours ago [-]
it doesn't actually define unacceptable risks. could be used for the development of biological, chemical, or nuclear agents? sure, an intelligent general-purpose model could be. so can wikipedia versus having to trawl through old library books. so can control-F on a PDF.
fredoliveira 18 hours ago [-]
It provides examples of unacceptable risks — they're in the article.
And the obvious whataboutism is obvious. Yes, you can find other sources for information on, say, developing bio weapons elsewhere. Does that mean you should have systems that aid you in collecting, synthesizing and displaying that information? That with the right interfaces and actuators can actually help you move towards that goal?
There's a line somewhere, that is very hard to draw, and yet should be drawn regardless.
_bin_ 9 hours ago [-]
Wikipedia collects, synthesizes, and displays such information, and provides a references section for further exploration. How the heck can anyone justify banning something that does the same thing, just in a chat format?
The threshold for building any of these, save nukes, is extremely low, and the bar for nukes is only high because there are fewer use cases for radioactive material, so it's simply less available.
biophysboy 23 hours ago [-]
You could say this about anything.
im3w1l 22 hours ago [-]
Yes, but sometimes it may not be a big deal? But with AI, it has the potential to be a very big deal.
biophysboy 22 hours ago [-]
I think it’s ok to regulate things that are a big deal.
21 hours ago [-]
Mistletoe 23 hours ago [-]
I know you are trying to reference this quote with regard to guns but I think you are actually disproving the point you are trying to make.
I'll gladly live in a country with no AI at all. Give me Dune post-Butlerian jihad levels of AI outlawing and I'll move there. I strongly believe that myself and all the people living there will be much happier.
Vecr 14 hours ago [-]
And Elon Musk moving to Mars will save him from human extinction.
21 hours ago [-]
throw310822 16 hours ago [-]
The title is a bit absurd as it's the law that defines what's acceptable and what isn't. Anything that this regulation doesn't allow is by definition "unacceptable", even if it's tea with biscuits.
hkwerf 16 hours ago [-]
> Anything that this regulation doesn't allow is by definition "unacceptable",
That's not true. The regulation first defines high-risk products with a narrow scope (see article 6 and annex III). It then requires risk management to be implemented. It does not explicitly state which risks are acceptable; it only requires the "adoption of appropriate and targeted risk management measures" that are effective to the point that the "overall residual risk" of the product is "judged to be acceptable".
IANAL, the whole story is a bit more complex. But not by much.
a3w 9 hours ago [-]
Can we ban any projects with "unacceptable risk", too? Independent of technology.
olivierduval 14 hours ago [-]
"Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater."
The US gave a real gift to the world with "extra-territorial" laws: now the EU uses them everywhere too!!!! :-)
Sooooo... the GAFAM will either have to "limit" some of their AI systems when used in the EU (NOT including EU citizens who may be abroad, but including foreign citizens in the EU) or be fined.
And I guess that this kind of fine may accumulate with GDPR fines, for example...
13 hours ago [-]
kazinator 1 days ago [-]
OK, so certain documents are allowed to exist out there; they are not banned. But if you train a mathematical function to provide a very clever form of access to those documents, that is banned.
That is similar to, say, some substance being banned above a certain concentration.
Information from AI is like moonshine. Too concentrated; too dangerous. There could be methyl alcohol in there that will make you go blind. Must control.
hkwerf 16 hours ago [-]
Training that function is not banned. Concentrating information is not banned. I have no idea where you're taking that from.
Only making use (i.e. putting into service a product containing it or placing that product on the market) of that function in a manner that is listed in article 5 (which is quite terse and reasonable) is prohibited unless covered by an exception.
Making use of that function in a manner that may be high-risk (see article 6 and annex III, also quite terse and reasonable) leads to the requirement of either documenting why it isn't high-risk or employing measures to ensure that the risk is acceptable (see article 9, item 5).
IANAL
14 hours ago [-]
mjw_byrne 15 hours ago [-]
As with GDPR, the spirit is admirable but the fundamental definitions have been hand-waved. So for the foreseeable future, the expensive lawyers you hire are going to answer the important questions with "well, we don't have much case law yet..."
Also, the definition of AI seems to exclude anything that doesn't "exhibit adaptiveness after deployment". So, a big neural network doing racist facial recognition crime prediction isn't AI as long as it can't learn on-the-fly? Is my naive HTTP request rate limiter "exhibiting adaptiveness" by keeping track of each customer's typical request rate in a float32?
Laws that regulate tech need to get into the weeds of exactly what is meant by the various terms up-front, even if that means loads of examples, clarification etc.
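For concreteness, the borderline case in question might look something like this hypothetical sketch: one float per customer holding an exponentially weighted estimate of their typical request rate, updated on every request.

    import time
    from collections import defaultdict

    ALPHA = 0.1          # smoothing factor for the running estimate
    BURST_FACTOR = 5.0   # allow bursts up to 5x a customer's typical rate

    last_seen: dict[str, float] = {}
    typical_rate: defaultdict[str, float] = defaultdict(lambda: 1.0)  # requests/sec

    def allow(customer: str) -> bool:
        now = time.monotonic()
        prev = last_seen.get(customer, now - 1.0)
        last_seen[customer] = now
        instant_rate = 1.0 / max(now - prev, 1e-6)
        # The stored per-customer estimate changes with every request: arguably
        # "adaptiveness after deployment", arguably just bookkeeping.
        typical_rate[customer] = (1 - ALPHA) * typical_rate[customer] + ALPHA * instant_rate
        return instant_rate <= BURST_FACTOR * typical_rate[customer]

Nothing here is a neural network, yet the system's behaviour does change based on what it observes after deployment, which is exactly the definitional fuzziness being complained about.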
skapadia 9 hours ago [-]
Not using AI will soon be an unacceptable risk.
xigency 9 hours ago [-]
With thinking like that it's not a far bridge to, "not leaking all your private keys will soon be an unacceptable risk."
skapadia 5 hours ago [-]
I mean the other side of this argument, which I also support, is "Using the Internet will soon be an unacceptable risk".
kamma4434 14 hours ago [-]
I would frankly love it if EU bodies were to plunge 10K into the real-time ad tracking industry as explained here https://news.ycombinator.com/item?id=42909921 , buy a lot of data on European citizens so that they have proof of illegal sharing, and then proceed to sue the hell out of all parties involved. That would make me feel better about them doing something, instead of regulating and regulating stuff that does not exist yet.
rustc 2 days ago [-]
Does this affect open weights AI releases? Or is the ban only on the actual use for the listed cases? Because you can use open weights Mistral models to implement probably everything on that list.
ben_w 2 days ago [-]
Use and development.
I know how to make chemical weapons in two distinct ways using only items found in a perfectly normal domestic kitchen, that doesn't change the fact that chemical weapons are in fact banned.
"""The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market, or its use has an impact on people located in the EU.
The obligations can affect both providers (e.g. a developer of a CV-screening tool) and deployers of AI systems (e.g. a bank buying this screening tool). There are certain exemptions to the regulation. Research, development and prototyping activities that take place before an AI system is released on the market are not subject to these regulations. Additionally, AI systems that are exclusively designed for military, defense or national security purposes, are also exempt, regardless of the type of entity carrying out those activities.""" - https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
hkwerf 16 hours ago [-]
The law specifically refers to putting a product into service or placing a product on the market if it is on the prohibited (article 5) or high-risk (article 6, annex III) list and doesn't have an applicable exception. IANAL, but it's pretty obvious that a set of weights alone is not such a product.
Also note that the law has explicit exceptions for research, development, open source and personal use.
atoav 17 hours ago [-]
The (well known) problematic bit about AI is that it furthers the diffusion of responsibility that already became so commonplace with computers.
People can just handwave catastrophic decisions away with a "the computer made an error, nothing we can do". This has been the case since before AI; the difference AI makes is just that more decisions are going to be affected by it.
What we need is to make the (legal) buck stop somewhere, ideally in a place that can positively change things. If you're a civil engineer and your bridge collapses because you fucked up the material selection, you go to jail. If you are a software engineer and you make design decisions in 2025 that would have had severe security implications even in the 80s, and this leads to the leaking of millions of medical records, you can still YOLO it off somehow and go to work on the next thing.
The buck has to stop somewhere with software and it doesn't really. I know that is a feature for a certain type of person, but it actively makes the world worse.
hkwerf 16 hours ago [-]
That is explicitly addressed by the regulation. If an AI product is used in the way you describe, measures need to be put in place to reduce the risk to an acceptable level. If these measures are missing, it cannot even be put into service. This places the responsibility for the use of AI clearly with either the entity that places the product on the market (i.e. if you sell something in the EU) or the entity that puts it into service (if an EU company uses an AI product across borders remotely). With the required employee AI training, this is further shifted onto the users if done correctly, or stays with the employer if not.
lyime 21 hours ago [-]
Do the same rules apply to government entities like law enforcement and security services?
lksaar 21 hours ago [-]
> For example, the Act permits law enforcement to use certain systems that collect biometrics in public places if those systems help perform a “targeted search” for, say, an abduction victim, or to help prevent a “specific, substantial, and imminent” threat to life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement can’t make a decision that “produces an adverse legal effect” on a person solely based on these systems’ outputs.
yes
ItsBob 2 days ago [-]
> AI that attempts to predict people committing crimes based on their appearance.
FTFY: AI that attempts to predict people committing crimes.
By "appearance" are they talking about a guy wearing a hoodie must be a hacker or are we talking about race/colour/religious garb etc?
I'd rather they just didn't use it for any kind of criminal application at all if I have a say in it!
Just my $0.02
layer8 1 days ago [-]
The actual wording is: "based solely on the profiling of a natural person or on assessing their personality traits and characteristics".
The Techcrunch article oversimplifies and is borderline misleading.
stared 1 days ago [-]
There was a joke:
- Could you tell from an image if a man is gay?
- Depending on what he is doing.
Oarch 2 days ago [-]
It will be hard to gauge this too if it's all just vectors?
1 days ago [-]
troupo 2 days ago [-]
> I'd rather they just didn't use it for any kind of criminal application at all if I have a say in it!
Instead of relying on Techcrunch and speculating, you could read sections (33), (42), and (59) of the EU AI Act yourself.
hirokio123 23 hours ago [-]
In the EU, no risk is ever tolerated. They keep creating laws to regulate things, but do they ever measure the actual impact? With so many regulations, it must be hard to move freely. There should also be a system for repealing regulations.
barrenko 16 hours ago [-]
They don't. Brussels will just produce some 600k undecipherable pages on this topic and waste honest workers' money to keep its bureaucrat caste afloat.
With time this is worsening, the caste grows ever bigger, and the system will not change until a WW2-type situation.
watwut 19 hours ago [-]
More free than America, unless you are a billionaire. I mean, American billionaires are trying to export their form of anti-democratic policies, Russia is trying to export its own fascism, etc.
It is not as if it were a safe democracy. But it is still one, and one that cares more about its own citizens than the rest. Maybe except Canada.
jart 17 hours ago [-]
[flagged]
mrfinn 13 hours ago [-]
Hope this law doesn't become into a peasant-trap.
But my gut is telling me that... that's exactly what it is. ("This and that is forbidden EXCEPT to us because blah blah blah")
profeatur 11 hours ago [-]
That’s what nearly all EU regulation is. Most of these regulators are on the payroll of 200-year-old companies who maintain their control of the economy by preventing any real challengers from rising.
whereismyacc 2 days ago [-]
sounds good
sunshine-o 24 hours ago [-]
Never forget the EU is run by lobbies by design. So usually those scary regulations are not what they seem to be.
Here is what happened in most corporations when GDPR came out:
- A new Chief Privacy Officer would be appointed,
- A series of studies would be conducted by big consulting firms with a review of all processes and data flow across the organisation,
- After many meetings they would conclude that a move to the cloud (one of the big ones) is the best and safest approach. The Chief Privacy and Legal Officer would put their stamp on it with some reservations,
- This would usually accelerate a lot of outsourcing and/or workforce reduction in IT,
- Bonus if a big "data governance" platform is bought and half implemented.
bmicraft 23 hours ago [-]
> Here is what happened in most corporations when GDPR came out
Do you have a source on that, or is this what you feel like may have happened? The move to the cloud was in full swing way before GDPR came out in 2016 and got enacted in 2018. Same for outsourcing.
sunshine-o 16 hours ago [-]
I can assure you I have witnessed it in dozens of organisations and have been involved.
In terms of timeline I can tell you:
- by 2012 I had already heard about that regulation, though I only knew it was going to be about data protection. At that time some "Big Tech" lobbying groups were already organising events in Brussels raising awareness about how important data privacy and protection are. I have been to some of those events and even witnessed very heated exchanges between some EU people and lobbyists about it.
Proof that a lot of people knew way before that time.
- by 2014 many big corporations were already preparing for GDPR; big budgets had already been validated. At that time they already knew it would be at least reasonably disruptive and that they had to start early to prepare.
Also remember that before 2014 "Windows Azure" (what would become the most successful cloud for most European corporations) was absolutely not ready as an enterprise product.
So these are not Silicon Valley startups on AWS since 2006; for many decision makers in those big corporations the upcoming GDPR problem predates the cloud solution.
myaccountonhn 17 hours ago [-]
Since GDPR came out I’ve been able to download all data that the big companies have on me, and delete it too. I’m happy for it.
watwut 19 hours ago [-]
It is not true. If you outsource, GDPR applies all the same. And companies had to literally implement changes into their software and literally did.
GDPR applies to data in cloud too.
Matthyze 19 hours ago [-]
Indeed.
“Where processing is to be carried out on behalf of a controller, the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures in such a manner that processing will meet the requirements of this Regulation and ensure the protection of the rights of the data subject.”
sunshine-o 16 hours ago [-]
Yes but here is the trick: when those companies found out their old applications had to be changed and legacy code had to be rewritten it was cheaper to move to the cloud/SaaS that is supposedly GDPR compliant.
waltercool 1 days ago [-]
Oh nooo, that will stop everyone from using AI systems.
Do European politicians understand that those laws are usually dead? There is no way a law like that can be enforced except by large companies.
Also, this kind of law would make Europeans stay on the losing side of the AI competition, as China and nearly every US corporation doesn't care about any of that.
jampekka 1 days ago [-]
The usage covered by "unacceptable risk" is stuff that's either illegal or heavily regulated already when humans do it too.
> Also, this kind of laws would make Europeans to stay at the loser side of the AI competition as China and mostly every US corporation doesn't care about that.
Not sure that's a game I want to win.
waltercool 22 hours ago [-]
And do you think a law will prevent that?
The law will only ensure that good companies like MistralAI or Black Forest Labs stay in the shadows.
This is the same idiocy as the Republican senator who wants to prohibit Deepseek usage in the US.
About legality: what is the illegal thing AI shouldn't do? Much of that knowledge is already accessible from books, even how to build weapons or explosives.
jampekka 18 hours ago [-]
It will at least greatly hinder law enforcement's capability to run massive Minority Report-style dragnets, targeted violence-incitement campaigns, or the grading of workers or schoolchildren based on their facial expressions, and other extremely nasty stuff.
The banned use cases are very specific and concern systems explicitly designed for such dystopian shit. AI giving advice on how to build weapons or explosives is not banned here. The "unacceptable risk" category does not concern companies like MistralAI or Black Forest Labs. This is not the same idiocy.
waltercool 3 hours ago [-]
I agree with you, but how do you effectively prevent it? Standards vary across countries; what's not acceptable in Europe might be acceptable elsewhere.
For instance, discussing or questioning Nazism is illegal in Germany but allowed in many other countries. Should every LLMs be restricted globally just because Germany deems it illegal?
Similarly, certain drugs are legal in the Netherlands but illegal in other countries, sometimes even punishable by death. How do you handle such discrepancies?
Let's face it: most of the time, LLMs follow US-centric anti-racism guidelines, which aren't as prominent or necessary in many parts of the world. Many countries have diverse populations without significant racial tensions like those in the United States, and don't prioritize African, Asian, or Latino positivity to the same extent.
Moreover, in the US, discussions about the First or Second Amendment are common, even among those with opposing views, but free speech and gun rights are taboo in other societies. How do you reconcile this?
In practical terms, if an LLM refuses to answer questions because they're illegal in some countries, users will likely use uncensored models instead, rendering the restricted ones less useful. This is why censorship is never successful, except in North Korea and China.
Take Stable Diffusion as an example: the most popular versions (1.5, XL, Pony) are flexible for unrestricted use, whereas intentionally censored versions (like 2.1 or 3.0) have seen limited adoption.
apwell23 1 days ago [-]
Europe is still riding high on the wealth they stole from the colonies. That well is going to run dry soon.
jampekka 1 days ago [-]
Sure. And USA will follow once the dollar dominance wanes.
I, for one, welcome our Chinese communist overlords.
AlchemistCamp 21 hours ago [-]
Both the share of global investments denominated in dollars and the US share of world GDP have increased over the past 25 years.
A vibrant tech ecosystem is a large part of the reason for both.
kandesbunzler 1 days ago [-]
European politicians don't understand anything. We are led by utter morons.
And redditors here complain about Trump; I'd much rather have him than these mouthbreathers we have.
mattnewton 1 days ago [-]
not to derail us into politics, but with Trump's team I'd take that trade for almost any EU nation's representatives in a heartbeat, if for no other reason than that they would respect the process and take years to break what the US is now breaking in days.
naabb 9 hours ago [-]
Yeah, Trump is not a total utter moron, you're totally right. I'd rather have the insurrectionist who hired a gaggle of oligarchs to run the country; that's a much better alternative to people who actually protect your rights and the environment with regulation.
hkwerf 16 hours ago [-]
> There is no way a law like that can be enforced except by large companies.
Other fields have very similar laws in the EU and there's lots of tiny companies able to comply with those. The risk control required by this law is the same that's required by so many other EU laws. Most companies that make high-risk products have no problem at all implementing that.
thiago_fm 16 hours ago [-]
Love the EU, living here is the best.
There are plenty of companies in the EU using and developing AI despite what Americans call our "heavy regulation"; it just isn't in the same ballpark as the US and China, which both have much bigger potential markets and a stronger VC base with, of course, more money.
The lack of regulations from the US in AI creates a very harsh atmosphere for the population.
It's so naive to think that Meta/Google (YouTube) don't have the power to manipulate people's opinions by showing content based on their algorithms. That's all manipulation through the use of AI.
They are thinking for you, making you depressed, making you buy useless stuff.
Look at the research on this subject and you will be surprised how much the likes of Meta and Google are getting away with.
Hope to see more EU fines for American Big Tech firms using AI to abuse people's weaknesses.
profeatur 11 hours ago [-]
> It's so naive to think that Meta/Google(Youtube) doesn't have power to manipulate people's opinion
We have that here too, except in our case it’s the government using the good old fashioned medium of television.
Ekaros 14 hours ago [-]
Actually, think about what advanced AI could do with, say, Meta. It could be used to rewrite the content for each user, without even telling them it's doing it. AI proponents see nothing wrong with that. Why not allow Meta, for instance, to change negative experiences with restaurants into positive ones, or to modify political statements made by some sides...
yieldcrv 13 hours ago [-]
so just launch everything in Liechtenstein and access the single market anyway
thrance 14 hours ago [-]
> AI that manipulates a person’s decisions subliminally or deceptively.
Can we use this to ban X and other American algorithmic political propaganda tools?
olivierduval 14 hours ago [-]
Actually that could be an interesting case... except if Twitter can show that its algorithm is "deterministic"
2rsf 14 hours ago [-]
Are they using AI for that? algorithmic doesn't necessarily mean AI
thrance 14 hours ago [-]
Right after Musk bought Twitter, the code for the algorithm was open sourced and I took a look at it. I don't know if it's still open today, but anyway, there was some ML stuff in it at the time. I guess it would depend on what constitutes "AI" to European legislators.
NekkoDroid 9 hours ago [-]
> I don't know if it's still open today but anyways
I am pretty sure that they made major, major changes after they dumped the code. Considering "verified boosts" and "elon boosts" are very noticeable, with the first being a confirmed "feature", I doubt the algorithm would even remotely work with today's data.
Anyway, what I want to say is that the last commit was over 2 years ago.
thrance 8 hours ago [-]
Yes, the AI behind X's "algorithm" has been heavily skewed towards favoring far right content. That much is obvious to anyone taking a look at the X front page. This should give some more grounds to a continent-wide ban on this platform, threatening our democracies with relentless propaganda.
campl3r 19 hours ago [-]
This is great news! Let's hope it's as successful as GDPR and gets applied in many more countries.
vtashkov 17 hours ago [-]
[dead]
adityamwagh 2 days ago [-]
[flagged]
dijksterhuis 2 days ago [-]
> unacceptable risk applications — the focus of this month’s compliance requirements — will be prohibited entirely. Some of the unacceptable activities include:
…
what follows is a list of some pretty nasty and insidious use cases.
it’s not “AI is completely banned”, it’s “consider the use cases you are working on responsibly”. only for those specific use cases, mind you.
for all other use cases not in the list, which is a significantly larger subset of development, just ensure you do the required safety/regulatory sign off work.
just like when we get our SaaS webapps evaluated for compliance with security standards, it's just a box ticking exercise for the most part.
IMTDb 2 days ago [-]
> AI that tries to infer people’s emotions at work or school.
When I talk to ChatGPT advanced voice mode with a happy and upbeat tone, it replies similarly. If I talk to it in a more neutral way, it adapts too. The AI thus infers my emotions. I use ChatGPT at work, my company pays for it.
Sounds like I should sue.
Also, I am trying to implement a new policy for pull requests in my tech team. We send an anonymous form to gather feedback. I sent all the responses in one block to ChatGPT and asked it to summarize the feedback. The AI indicated that “generally people seem pretty happy about the new policy”. Should I go to jail now for being clearly a deranged madman, according to the EU?
dijksterhuis 2 days ago [-]
from the actual act, which is linked in the article
> the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions
emphasis mine.
chatgpt was not specifically put into service to monitor your emotions at work.
so it’s fine i’d say. and your pull request thing is fine.
also, you’re not trying to infer the emotions of any specific natural person. you’re trying to gauge satisfaction with a process. that’s different to working out whether someone is feeling sad or feeling lonely in the workplace because they “aren’t smiling enough”.
unfortunately that means you can’t sue and get a payday.
edit — i find it kind of funny that people are knee jerk reacting emotionally to this. kind of ironic when you consider the example at hand.
infecto 2 days ago [-]
And it's kind of funny when non-legal experts attempt to say whether something is fine or not.
It depends highly not only on how it's written but on the spirit of what the EU is attempting to do. The knee jerk reaction is probably that historically institutions do a terrible job of writing rules, especially rules around new technology.
dijksterhuis 2 days ago [-]
you are right to call me out. IANAL. on me.
IMTDb 1 days ago [-]
An actual lawyer will probably agree with you. And also charge me to give me the same opinion.
They will also say “but if you end up being sued, it’s not my fault”.
So basically I just need to second guess everything I do, until someone somewhere gets sued and loses and another dude gets sued and loses. At that point we will have some idea about what the law really entails (at which point they change it and the cycle restarts)
In the meantime, my US competitors are just moving full steam ahead.
user432678 2 days ago [-]
I actually like it. Moving to the EU is currently my back-up plan for when “great advances in AI” lead the rest of the world to make people lose jobs, give up privacy and overall turn a dystopian future into the present. I’ll be happily coding my boilerplate code by hand, enjoying my boring life without AI.
joshstrange 2 days ago [-]
The idea that the EU wouldn’t just follow suit seems kind of laughable. Or rather that the EU would be some bastion of safety from AI. If these “great advances in AI” happen, then EU companies will just outsource their jobs, and there will be no boilerplate/CRUD jobs.
user432678 1 days ago [-]
Well, EU is not a single country. I can imagine Germany outsourcing such jobs to Poland. Oh well, I can move back “home” from UK then.
OKRainbowKid 2 days ago [-]
That doesn't seem obvious to me. Would you mind elaborating?
troupo 2 days ago [-]
[flagged]
_rm 2 days ago [-]
[flagged]
lm28469 2 days ago [-]
That's pretty much what's going to happen in the US, isn't it? We'll be able to compare very soon.
2 days ago [-]
2 days ago [-]
kandesbunzler 2 days ago [-]
as a german, yea pretty much.
mdhb 2 days ago [-]
[flagged]
Bancakes 2 days ago [-]
Remind me of the European Silicon Valley and its greatest accomplishments?
ben_w 1 days ago [-]
In addition to the european groups bought by US big tech, and all the European divisions of American-parented big tech firms?
A quick search reveals DeepMind, Skype, SwiftKey, Shazam, Moodstocks for the former. Bit of overlap with the latter, too, as e.g. AlphaFold is from DeepMind after getting bought.
Quick look on the Apple App store also gets me Komoot (Germany), Trade Republic (Germany), Revolut (UK), Babbel (Germany).
Aside from them, ETH Zürich and CERN are doing pretty good work, too, the latter inventing the modern hypertext based web on which you are currently reading this.
Cambridge has some decent digital tech, also has Metalysis and The Welding Institute, and was where the double helix structure of DNA was found and where Stephen Hawking chose to work.
kandesbunzler 1 days ago [-]
okay so the very few examples you named have been bought up and are now part of American companies.
And did you really just bring up Komoot, an outdoor biking/hiking app, as a comparison to Silicon Valley? lmfao
ben_w 1 days ago [-]
You seem unable or unwilling to understand:
1) why they were bought by the American companies
2) that having an American owner doesn't make them directly American or magically cause them to be in Silicon Valley
3) The country names I put in brackets
4) The location of Zürich and CERN
And instead want to focus on the fact that one specific example of a top ranked app is not all by itself an entire sector, while ignoring all the other examples *right next to it* or the fact that this was trivial to find.
To demonstrate why you're missing the wood for the trees, consider: I can accurately say "Facebook" isn't really all that important, it's just an advertising provider getting in the way of people trying to talk to each other — but that it isn't all of Silicon Valley all by itself doesn't mean its headquarters are not relevant as an example of "Silicon Valley".
HeatrayEnjoyer 2 days ago [-]
Not actively overthrowing democracy and infecting society with an attention addiction industry.
2 days ago [-]
micromacrofoot 2 days ago [-]
the US currently has a tech oligarch employing teenagers to audit government finance software
2 days ago [-]
wtcactus 17 hours ago [-]
I guess that’s how the EU is expecting to catch up to the rest of the world in AI… by imposing even more regulation.
It’s maddening that a group of non-elected politicians and their friends have this kind of power and are using it to destroy Europe and our future.
okokwhatever 1 days ago [-]
EU has become a true dystopia.
127 18 hours ago [-]
It's great Americans criticize the EU. I just wish they did more than (paraphrasing) "EU sucks, lol." Wouldn't hurt being a bit more constructive.
BDPW 18 hours ago [-]
Can you elaborate specifically what you think makes this law so detrimental? And to who?
naabb 9 hours ago [-]
You might be looking in the mirror
dathinab 21 hours ago [-]
ah yes, countries doing their job of trying to protect their citizens are a dystopia, sure
bmicraft 23 hours ago [-]
Ah yes, the worst of all the dystopias: The one where companies can't fuck people over as much as they'd like to. Truly a horror scenario for any tech billionaire.
sunaookami 20 hours ago [-]
It's really sad what happened to the EU after von der Leyen took over. Someone must stop her and the EU commission or we will fall even further into irrelevance.
hcfman 1 days ago [-]
So is a burglar alarm that uses AI to see a person on your property and alert the owner (same function as the old PIR sensors) now AI tech that is predicting that there is a person who might want to commit a crime, and thus a banned use of AI?
Or is it something that is open to interpretation, where we let the courts sort it out and fine you 15,000,000 euros if someone in the EU has leverage on the courts and doesn't like you?
Oh and the courts will already kill any small startup.
dijksterhuis 1 days ago [-]
From the act:
> to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;
In all fairness, that would still ban a system that understands my front door being approached by a guy in a postal uniform carrying a parcel is likely unproblematic, while a guy in a hoodie carrying a crowbar is likely to commit a crime.
dathinab 21 hours ago [-]
no it doesn't, because here you are assessing their actions (approaching your front door in some unusual context, like holding a crowbar)
in addition, having front doors, idk, calling the police on people just because it's an "unusual" situation would be quite dystopian, and for society as a whole it would most likely lead to far more damage than it would prevent. so instead of your door trying to "detect maybe soon to happen crime", it could "try to detect an unusual situation which might require human action" and then have the human (you, or someone in a call center if you aren't available) take the action (which might be just one or two button presses; nothing prevents you from taking the action by directing the AI to do it for you)
and let's not forget we are speaking about before the break-in (and maybe no break-in at all, because it's actually a Halloween costume or similar); if the system detects someone breaking in, we have an action
caseyy 18 hours ago [-]
Not really. Such a system would be evaluating the whole context and not solely profiling the person. What the person is doing (delivering parcel vs cutting a chain link fence) is not a part of their personality or profile. How much danger they’re posing in the moment also isn’t, and so on.
Arguably, an AI security system with great objective understanding of the unfolding circumstances would be a lot better than one profiling people passing by and raising an alarm each time a person that looks a certain way walks by.
It’s just that simple CV-based classification, perhaps trained with unsupervised learning, is easier in AI than observing a chain of actions. The labelled data set is usually accessible from police orgs if you want to simply train an AI to look at people and judge them based on visual traits. By the EU saying “this easy way is not good enough”, it is encouraging technological development in a way. Develop a system that’s more objective than visual profiling, and the market is yours.
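To make that distinction concrete, here is a minimal invented sketch (Python, not from the Act or any real product) of an alert rule keyed to observed actions rather than to who the person appears to be:

    # Hypothetical sketch: alerts driven by what is happening, not by a profile
    # of the person. The action labels are assumed to come from some upstream
    # vision model; the risk scores are invented for illustration.
    CONTEXT_RISK = {
        "cutting_fence": 0.9,
        "prying_door": 0.9,
        "carrying_parcel_to_door": 0.05,
        "walking_past_on_sidewalk": 0.0,
    }

    def should_alert(observed_actions: list[str], threshold: float = 0.7) -> bool:
        """Raise an alert based only on observed actions and context."""
        risk = max((CONTEXT_RISK.get(a, 0.1) for a in observed_actions), default=0.0)
        return risk >= threshold

    print(should_alert(["carrying_parcel_to_door"]))  # False: courier delivering a parcel
    print(should_alert(["prying_door"]))              # True: someone prying at the door

Nothing in the scoring depends on the person's traits; swap the action labels for "looks like X" and you are arguably back in prohibited-profiling territory.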
DocTomoe 15 hours ago [-]
> Develop a system that’s more objective than visual profiling, and the market is yours.
Until another braindead legislator finds another thing he can rally against and throws a stick between my legs.
There are reasons innovation happens in China and - to a lesser extent - in the United States. This is one of them.
WrongAssumption 22 hours ago [-]
How so?
https://outofthecomfortzone.frantzmiccoli.com/thoughts/2024/... and here is my shameless plug.
> The main burden falls on SMEs, which experienced an average decline in profits of 8.5 percent. In the IT sector, profits of small firms fell by 12.5 percent on average. Large firms, too, are affected, with profits declining by 7.9 percent on average. Curiously, large firms in the IT sector saw the smallest decline in profits, of “only” 4.6 percent. Specifically, the authors find “no significant impacts on large tech companies, like Facebook, Apple and Google, on either profits or sales,” putting to bed the myth that U.S. technology firms are the enemy of regulation because it hits their bottom lines.
https://datainnovation.org/2022/04/a-new-study-lays-bare-the...
Regulatory Capture, no?
> I'm old enough to remember when everyone claimed EU tech law was about to ban memes, which didn't happen...
AFAIK those parts of that law were changed somewhat
[citation needed]
This is just laughably incorrect. Literally every Fortune 500 that I work with who has operations in Europe has an entire team that owns GDPR compliance. It is one of the most successful projects to curtail businesses treating private data like poker chips since HIPAA.
Anyway, GDPR doesn't protect your data, it just specifies how companies can use it. So my name, address, phone number, etc. will still be stored by every webshop for 10 years or so, just waiting to be breached (because of some tax laws).
https://noyb.eu/en
Facebook and Google got sued, paid fines, and changed their behavior. I can do an easy export of all of my FB and G data, thanks to the GDPR.
"EU small software companies pay estimated extra 400 EUR/year to satisfy GDPR compliance"
WTF? no! I work with several small companies and it's super easy to just NOT store anyone's birthday (why would you need that for e-commerce?) and to anonymize IPs (Google provides a plugin for GA). And, basically, that's it. Right now, I can't even find an example of how the GDPR has created any costs. It's more like people changed their behavior and procedures once GDPR was announced and that's "good enough" to comply.
How is the gdpr vague?
https://gdpr.eu/eu-gdpr-personal-data/
They are explicitly listed as an example of PII.
Moreover, to reason about this, one also needs to take into account Art 6.2 which means there might be an additional 27 laws you need to find and understand.
Note, however, that recital 30, which you quoted, is explicitly NOT referenced by Art. 6, at least according to this unofficial site: https://gdpr-info.eu/art-6-gdpr/
This particular case might be solved through hashing, but then there are only 4.2bn IPv4 addresses, so it's easy to try out all hashes. Or maybe it's only OK with IPv6?
I find this vague or at least hard to reconcile with technical everyday reality, and doing it well can take enormous amounts of time and money that are not spent on advancing anything of value.
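For what it's worth, here is a tiny sketch of why a bare hash of an IPv4 address is weak anonymisation (standard library only; the address is invented and the search is limited to a /24 so it runs instantly, but the real attack just enumerates all ~4.3bn values):

    import hashlib
    import ipaddress

    def hash_ip(ip: str) -> str:
        return hashlib.sha256(ip.encode()).hexdigest()

    target = hash_ip("192.0.2.57")  # the "anonymised" value sitting in a log

    # Brute-force the (deliberately small) candidate space.
    start = int(ipaddress.IPv4Address("192.0.2.0"))
    end = int(ipaddress.IPv4Address("192.0.2.255"))
    for ip_int in range(start, end + 1):
        candidate = str(ipaddress.IPv4Address(ip_int))
        if hash_ip(candidate) == target:
            print("Recovered:", candidate)
            break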
In addition to the other answers, I want to point out that recital 49 says that it is possible under legitimate interest (6(1)f).
No, it doesn't. Subsections b, c, and f roughly cover this. On top of that, no one is going to come at you with fines for doing regular business things as long as you don't store this data indefinitely, sell it to third parties, or use it for tracking, as laid out in Article 1.1.
On top of that, for many businesses existing laws override GDPR. E.g. banks have to keep personal records around for many years.
That being said: it is extremely strict, a lot of lawyers like to make it stricter (because for them it means safer), and a lot of lawyers have to back off under business constraints (which sometimes push to go below legal requirements). My experience is that no two companies have the same understanding of GDPR.
Disclaimer: I am advising a company that sells AI Act related compliance tooling.
- AI that collects “real time” biometric data in public places for the purposes of law enforcement.
- AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
- AI that uses biometrics to infer a person’s characteristics
All of the above can be achieved with just software, statistics, old ML techniques, i.e. 'non hype' AI kind of software.
I am not familiar with the details of the EU AI Act, but it seems like the article is simplifying important details.
I assume the ban is on the purpose/usage rather than whatever technology is used under the hood, right?
For the purposes of this Regulation, the following definitions apply:
(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; Related: Recital 12
https://artificialintelligenceact.eu/article/3/
https://artificialintelligenceact.eu/recital/12/
So, it seems like yes, software, if it is non-deterministic enough, would qualify. My impression is that software that simply says "if your income is below this threshold, we deny you a credit card" would be fine, but somewhere along the line, when your decision tree grows large enough, that probably changes.
https://uk.practicallaw.thomsonreuters.com/Glossary/UKPracti... describes a bit of how recitals interact with the operating law; they're explicitly used for disambiguation.
So your hip new AI startup that's actually just hand-written regexes under the hood is likely safe for now!
(Not a lawyer, this is neither legal advice nor startup advice.)
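As a rough illustration of where that line might sit (assuming scikit-learn is available; the numbers are invented and, again, this is not legal advice), compare a hand-written threshold with a model that derives its own thresholds from data, which sits closer to the Act's "infers, from the input it receives, how to generate outputs" wording:

    from sklearn.tree import DecisionTreeClassifier

    def handwritten_rule(income: float) -> bool:
        # Explicit, human-authored threshold: arguably not "inferring" anything.
        return income >= 30_000

    # Toy training data: (income, outstanding_debt) -> approved?
    X = [[20_000, 5_000], [45_000, 2_000], [60_000, 30_000], [80_000, 1_000]]
    y = [0, 1, 0, 1]

    # The tree derives its own thresholds from the examples it is given.
    model = DecisionTreeClassifier(max_depth=2).fit(X, y)

    print(handwritten_rule(40_000))           # True, and trivially explainable
    print(model.predict([[40_000, 10_000]]))  # output of a learned decision boundary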
That's every AI system. It follows the rules defined solely by the programmers (who I suppose might sometimes stretch the definition of natural persons) who made pytorch or whatever framework.
Just rerun the application with higher income until you get a pass. Then tell the person their application was rejected because income was not at least whatever that passing income amount was.
Maybe also vary some other inputs to see if it is possible to get a pass without raising income as much, and add to the explanation that they could lower the income needed by, say, getting a higher credit score, lowering their outstanding debt, or not changing jobs as often.
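A crude sketch of that "rerun it until it passes" idea, with an invented stand-in for the black box (not any real credit model):

    def decision(income: float, credit_score: int) -> bool:
        # Opaque stand-in for "the black box".
        return income * 0.4 + credit_score * 30 > 40_000

    def minimal_passing_income(income, credit_score, step=500, cap=1_000_000):
        """Increase income in small steps until the decision flips to approve."""
        candidate = income
        while candidate <= cap:
            if decision(candidate, credit_score):
                return candidate
            candidate += step
        return None  # never passes within the cap

    needed = minimal_passing_income(income=35_000, credit_score=600)
    print(f"Application would pass at roughly {needed} in income")

Repeating the search while varying credit score or debt instead gives the other "what you could change" explanations mentioned above.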
It is simply that, e.g., "on some historical dataset, this boundary most reliably predicted default" -- but this confers no normative reason to accept or reject any individual application (cf. the ecological fallacy). And so, in a very literal way, there is no normative reason the operator of this model has for accepting/rejecting any individual application.
Why do you need an AI if what you are doing is "if X < N" ?
For someone with a great credit history, lots of assets, a long term job in a stable position, and low debt they might be approved with a lower income than someone with a poor credit history whose income comes from a job in a volatile field.
There might be some absolute requirements, such as the person have a certain minimum income independent of all those other factors, and they they have a certain minimum credit score, and so on. If the application is rejected because it doesn't meet one of those then sure, you can just do a simple check and report that.
But most applications will be above the absolute minimums in all parameters and the rejection is because some more complicated function of all the criteria didn't meet the requirements.
But you can't just tell the person "We put all your numbers into this black box and it said 'no'." You have to give them specific reasons their application was rejected.
But banks, at least in my country (central EU), don't have to explain their reasons for rejecting a mortgage application. So why would their automated systems have to?
There is a so-called three-line system -- the operational line does the actual thing (approves or rejects the mortgage), the second line gives the operational line the manual for doing so the right way, and internal audit keeps an eye on whether whatever the first line is doing is actually what the policy says they should be doing.
It's entirely plausible that the operational line is an actual LLM which is trained on a policy that the compliance department drafted, and the audit department occasionally checks the outputs of the model against the policy.
But at this point it's much easier to use an LLM to write a deterministic function in your favorite Lisp based on the policy and run that to make decisions.
[1] https://en.wikipedia.org/wiki/Equal_Credit_Opportunity_Act#R...
[2] https://www.nolo.com/legal-encyclopedia/do-lenders-have-to-t...
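Circling back to the "deterministic function written from the policy" comment above, a purely illustrative sketch of what that could look like (in Python rather than Lisp; the thresholds are invented, not from any real policy):

    def mortgage_decision(income: float, debt: float, credit_score: int) -> tuple[bool, str]:
        # Every branch maps to a numbered line in the written policy,
        # so the audit line can check outputs against the document.
        if credit_score < 550:
            return False, "Policy 3.1: minimum credit score not met"
        if debt > income * 4:
            return False, "Policy 3.2: debt-to-income ratio above 4x"
        if income < 25_000:
            return False, "Policy 3.3: minimum income not met"
        return True, "All policy checks passed"

    approved, reason = mortgage_decision(income=48_000, debt=150_000, credit_score=680)
    print(approved, "-", reason)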
The obvious straightforward read is along the lines of: imagine you make some software, which then does something bad, and you end up in court defending yourself with an argument along the lines of, "I didn't explicitly make it do it, this behavior was a possible outcome (i.e. not a bug) but wasn't something we intended or could've reasonably predicted" -- if that argument has a chance of holding water, then the system in question does not fall under the exception your quoted.
The overall point seems to be to make sure systems that can cause harm always have humans that can be held accountable. Software where it's possible to trace the bad outcome back to specific decisions made by specific people who should've known better is OK. Software that's adaptive to the point it can do harm "on its own" and leaves no one but "the system" to blame is not allowed in those applications.
We'll probably have to wait until they fine someone a zillion dollars to figure out what they actually meant.
The distinction is accountability. Determining whether a human decided the outcome, or it was decided by an obscure black box where data is algebraically twisted and turned in a way no human can fully predict today.
Legally that accountability makes all the difference. It's why companies scurry to use AI for all the crap they want to wash their hands of. "Unacceptable risk AI" will probably simply mean "AI where no human accepted the risk", and with it the legal repercussions for the AI's output.
In reality, we will wait until someone violates the obvious spirit of this so egregiously and ignore multiple warnings to that end and wind up in court (a la the GDPR suits). This seems pretty clear.
If you use Copilot to generate code by essentially just letting it autocomplete the entire code base with little supervision, yeah, sure, that might maybe fall under this law somehow.
If you use Copilot like you would use autocomplete, i.e. by letting it fill in some sections but making step-by-step decisions about whether the code reflects your intent or not, it's not functionally different from having written that code by hand as far as this law is concerned.
But looking at these two options, nobody actually does the first one and then just leaves it at that. Letting an LLM generate code and then shipping it without having a human first reason about and verify it is not by itself a useful or complete process. It's far more likely this is just a part of a process that uses acceptance tests to verify the code and then feeds the results back into the system to generate new code and so on. But if you include this context, it's pretty obvious that this indeed would describe an "AI system" and the fact there's generated code involved is just a red herring.
So no, your gotcha doesn't work. You didn't find a loophole (or anti-loophole?) that brings down the entire legal system.
Two different machines can be designed for the same use case, but the possible bad outcomes in either "correct" use or malicious use of the two machines can be very different. So it is reasonable to ban the one which has unacceptable bad outcomes.
For example, while both a bicycle and a dirt bike are mobility vehicles, a park may allow one and ban the other.
It would seem accountability would only be higher in systems where humans were not part of the decision making process.
So if an AI can't change its weights after deployment, it's not really an AI? That doesn't make sense.
As for the other criteria, they're so vague I think a thermostat might apply.
A learning thermostat would apply, say one that uses historical records to predict changes in temperature and preemptively adjusts. And it would be low risk and unregulated in most cases. But attach to a self-heating crib or premature baby incubator and that would jump to high risk and you might have to prove it is safe.
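A minimal invented sketch of such a "learning" thermostat (the readings and setpoint are made up; the point is only that it derives its behaviour from input history rather than from a fixed rule):

    from collections import deque

    class LearningThermostat:
        def __init__(self, setpoint: float = 20.0, history_len: int = 6):
            self.setpoint = setpoint
            self.history = deque(maxlen=history_len)  # recent hourly readings

        def record(self, temperature: float) -> None:
            self.history.append(temperature)

        def predicted_next(self) -> float:
            # Naive "model": extrapolate the average change between readings.
            if len(self.history) < 2:
                return self.history[-1] if self.history else self.setpoint
            readings = list(self.history)
            deltas = [b - a for a, b in zip(readings, readings[1:])]
            return readings[-1] + sum(deltas) / len(deltas)

        def should_preheat(self) -> bool:
            return self.predicted_next() < self.setpoint

    t = LearningThermostat()
    for reading in [21.0, 20.4, 19.9, 19.5]:
        t.record(reading)
    print(t.should_preheat())  # True: the trend says it is about to drop below 20°C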
Quite.
One wonders if the people who came up with this have any actual understanding of the technology they're attempting to regulate.
As long as the thermostat doesn't control people's lives, that's fine.
We already have ways to predict avalanche risk that are well understood and explainable. There should be a high threshold on replacing that.
The precise language on high risk is here [1], but some enumerations are placed in the annex, which (!!!) can be amended by the commission, if I am not completely mistaken. So this is very much a dynamic regulation.
[1] https://artificialintelligenceact.eu/article/6/
Just joking, but I think it is a funny parallel. Also because of it being probably solely human made rules.
yes, and with the same problems if applied to the same use cases in the same way
in turn they get regulated, too
it would be strange to limited a law to some specific technical implementation, this isn't some let's fight the hype regulation but a serious long term effort to regulate automatized decision making and classification processes which pose a increased or high risk to society
To me it’s just generative AI, LLMs, media generation. But I see the CNN folks suddenly getting “AI” attention. Anything deep learning really. It’s pretty weird. Even our old batch processing, SLURM based clusters with GPU nodes are now “AI Factories”.
At least that's what we used to do.
Btw, you bring up the perspective of realising that our tools weren't adequate. But it's broader: completely ignoring the tools, we also realise that, e.g., being able to play chess really, really well didn't actually capture what we wanted to mean by 'intelligence'. Similar for other outcomes.
That's not what AI is.
Artificial Intelligence has decades of use in academia. Even a script which plays Tic Tac Toe is AI. LLMs have advanced the field profoundly and gained widespread use. But that doesn't mean that a Tic Tac Toe bot is no longer AI.
When a term passes to the mainstream people manufacture their own idea of what it means. This has happened to the term "hacker". But that doesn't mean decades of AI papers are wrong because the public uses a different definition.
It's similar to the professional vs the public understanding of the term "prop" in movie making. People were criticizing Alec Baldwin for using a real gun on the set of Rust instead of a "prop" gun. But as movie professionals explained, a real gun is a prop gun. Prop in theater/movies just means property. It's anything that's used in the production. Prop guns can be plastic replicas, real guns which have been disabled, or actually firing guns. Just because the public thinks "prop" means "fake", doesn't mean movie makers have to change their terms.
This is not about data collection (GDPR already takes care of that), but about AI-based categorization and identification.
"AI system" and other terms are defined in article 3: https://artificialintelligenceact.eu/article/3/
Trying to define it for scope was IMHO a mistake.
Their deep meaning is "we don't want machines to make decisions". A key point for them has always been "explainability".
GDPR has a provision about "profiling" and "automated decision making" for key aspects of life. E.g. if you ask for a mortgage (a pretty important, life-changing/affecting decision) and the bank rejects it, you a) can ask them "why" and they MUST explain, in writing, and b) if the decision was made in a system that was fed your data (demographic & financial) you can request that a human repeat the 'calculations'.
Good luck having ChatGPT explain.
They are trying to avoid the dystopian nightmare of (apologies - I don't mean to disrespect the dead, I mean to disrespect the industry) Insurance & Healthcare in the US, where a system gets to decide 'your claim is denied' against humans' (doctors', in this case) (sometimes imperfect) consultations, because one parameter says "make X amount of profit above all else" (perhaps not coded as this precise parameter, but in some other way).
Now, considering the (personal) data collection and sending to companies in the US (or other countries) that don't fall under the Adequacy Decisions [0], and combining that with the aforementioned (decision-making) risks, using LLMs in Production is 'very risky'.
Using Copilot for writing code is very much different, because there the control of "converting the code to binaries, and moving said binaries to the Prod env." stays with people (they used to call them Librarians back in the day...), so human intervention is required to do code review, code tests, etc. (just in case SkyNet wrote code to export the data 'back home' to OpenAI, xAI, or any other AI company it came from).
I haven't read the regulation lately/in its final text (I contributed and commented some when it was still being drafted), and/but I remember the discussions on the matter.
[0]: https://commission.europa.eu/law/law-topic/data-protection/i...
EDIT: ultimately we want humans to have the final word, not machines.
They will interpret "predict" as merely "report" or "act on".
This is terrible.
> AI that tries to infer people’s emotions at work or school
I wonder how broadly this will be construed. For example, if an agent uses CoT and it needs emotional state as part of that, can it be used in a work or school setting at all?
So, this targets the use case of a third party using AI to detect the emotional state of a person.
Then I started thinking how this could be used in restaurants to see if waiters smile at the people they are serving, or in customer service (you can actually hear it when people smile on the phone).
Then I realised that this kind of tech would definitely lead to abuse
(btw that's not the reason I didn't build it, it was just not that easy to build)
If I must interact with an AI for this, I'd prefer that it infer my emotions correctly.
It might well be a useful tool to point at yourself.
It's an entirely inappropriate one to point at someone else. If you can't imagine having someone estimate your emotional state (usually incorrectly), and use that as a basis to disregard your opinion, you've lived a very different life to mine. Don't let them hide behind "the AI agreed with my assessment".
The regulation explicitly provides an exception for medical reasons:
Yes. This is how you know that all the people screaming about the EU overregulating and how the EU will miss all that AI innovation haven't even bothered to Google or ask their preferred LLM about the legislation. It's mostly just common sense to avoid EU citizens having their rights or lives decided by blackbox algorithms nobody can explain, be it in a Post Office (UK) scandal style, or US healthcare style.
The "business investors" and "innovators" can take this kind of business elsewhere.
This kind of talk where regulators are assaulted by free marketeers and freedom fighters is unacceptable here.
Let us not misinterpret business people as "innovators"; if what they do is not a net positive for society, they do not belong here.
[1] https://www.europarl.europa.eu/news/fr/press-room/20240308IP...
The understanding is that interpreting laws leads to bias, partiality, and injustice; while following the letter of the law equally in each situation is the most just approach.
I lived in Lithuania for a while and at the time, there was a big national debate about how “family” should be defined in laws — what people it can and can’t include.
So yes — a lot of emphasis is put on verbose definitions in literalist legal texts, and very, very verbose explanations of many edge cases, too.
I know first hand it will be very hard to read Lithuanian legal texts for someone who is not a native speaker of the language, and even for natives it’s a challenge. So you could instead google “literalist legal systems”, and I believe you’ll find at least some examples/more context in English somewhere.
It's also quite clear that places without strong privacy protections like the US are developing into dystopian hellscapes.
Early adopters signed contracts with companies that provided shitty WiFi at high prices for a long time. A $500 hotel would have $30/night connections that were slow, while the Courtyard Marriott had it for free.
You can't have nice things, but on the bright side Google/Apple/Facebook won't know what you had for dinner.
Now give us your whole financial transaction and travel history, so we can share it with the US, a hostile country, citizen!
Never mind the fact that you obviously come from a privileged position if you think that money is all that's important. You're blinded.
Then there's the nontrivial number of especially local US news sources which now give me a cheerful "451 Unavailable For Legal Reasons" error code.
Then there's the outright stupid stuff - like lightbulbs that do not cost 15 euros a piece (to save 'energy'), or drinking straws that do not dissolve in my coke within the first minute (to avoid 'disposable plastics'). There are hundreds of examples like that.
The EU is a regulation juggernaut, and is making the world an actively worse place for everyone globally. See "Cookie Banners".
So the EU should not control where your data is processed? You can't claim in one comment to be bummed about data exchanges between the EU and the US (which you do), and then not understand why there are regulations in place that are slowing down the roll-out of things like Apple Intelligence, for your benefit.
1. I am giving my data freely and because of my own decision to an organization I trust and
2. The state is taking my data by force of law to share it with an inherently untrustworthy organization.
I understood he was referring to incandescent light bulbs, which have been largely regulated out of the market. So you now need to get an "Edison light bulb" which circumvents the regulation but costs significantly more.
https://en.wikipedia.org/wiki/Phase-out_of_incandescent_ligh...
> A ban covering most general service incandescent lamps took effect in the United States in 2023
So you can't even buy them in the US anymore, either. And cheap LEDs are available everywhere, with many color temperatures to choose from.
Yes, it only affects airlines that have connections to the US. But if I book Lufthansa from Frankfurt to Tokyo, the PNR will still be sent to the US, for Lufthansa has connections to the US.
Yes, there are 'safeguards' in there, to shackle the DHS to be responsible with the data - but who seriously thinks the data, once in US hands, is used responsibly and only for the matters outlined in the treaty? The US has been less of a reliable partner for decades now.
Oh, right. They won't do that for financial transactions, right? Right?
https://eur-lex.europa.eu/EN/legal-content/summary/agreement...
Any proof of that claim? The agreement specifically mentions flights between the EU and the US, so any departure from that (like the scenario you describe) is unlawful, according to my own understanding.
Article 2.1 clearly states it is applicable to all EU airlines *operating* flights to or from the US. That does not mean they ONLY have to provide PNR FOR those flights
Article 3 speaks about "Data in their (the airlines) reservation systems". There's no limitation to only US-related flights.
The specific mention of flights to and from the US you are likely referring to is in the preamble, referencing a law the US set up prior.
Both documents clearly define the use cases that are applicable for the data sharing, and the second document you linked also explicitly states that the US has to put in the same effort to provide the same capabilities to the EU as well.
We elected a President who tried to lead an armed insurrection but we'll never press criminal charges because we elected him President again.
Sorry, but anything the EU has ever done pales in comparison with that.
They hope the paperwork will be complete by 2053, which will allow an EU president to, hopefully, attempt some kind of coup (if everything is filled out correctly) sometime before 2060.
It is the utter bane of "move fast and break things", and I'm so glad to have it.
I will never understand the submissive disposition of Americans towards billionaires who sell them out. They are all about being rugged cowboys while smashing the systems that foster their own well-being. It's like their pathology to be independent makes them shoot themselves in the foot. Utterly baffling.
Except that the person responsible for travesty of justice framing 9 innocent people in this Dutch series is currently the president of the court of Maastricht.
https://npo.nl/start/serie/de-villamoord
Remember: the courts have the say as to who wins and loses under these new vague laws. The ones running the courts have to not be corrupt. But the case above shows that this is in fact not the case.
> AI that manipulates a person’s decisions subliminally or deceptively.
That can be a hugely broad category that covers any algorithmic feed or advertising platform.
Or is this limited specifically to LLMs, as OpenAI has so successfully convinced us that LLMs really are AI and previous ML tools weren't?
> Exploitation of vulnerabilities of persons, manipulation and use of subliminal techniques
techcrunch simplified it.
from my reading, it counts if you are intentionally setting out to build a system to manipulate or deceive people.
edit — here’s the actual text from the act, which makes more clear it’s about whether the deception is purposefully intended for malicious reasons
> the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm
https://artificialintelligenceact.eu/recital/29/
Either the behavior in question is actually bad in which case there shouldn't be exceptions, or there's actually nothing inherently wrong with it in which case you have misidentified the actual problem and are probably needlessly criminalizing a huge swathe of normal behavior beyond just the one exception you happened to think of.
Right now, and for 10 years at least, with targeted advertising, it has been completely normalised and typical to use machine learning to intentionally and subliminally manipulate people. I was taught less than 10 years ago at a top university that machine learning was classified as AI.
It raises many questions. Is it covered by this legislation? Other comments make it sound like they created an exception, so it is not. But then I have to ask, why make such an exception? What is the spirit and intention of the law? How does it make sense to create such an exception? Isn't the truth that the current behaviour of the advertising industry is unacceptable but it's too inconvenient to try to deal with that problem?
Placing the line between acceptable tech and "AI" is going to be completely arbitrary and industry will intentionally make their tech tread on that line.
Because instead of reading the source, you're reading a sensationalist article.
> That can be a hugely broad category that covers any algorithmic feed or advertising platform.
Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.
----
We're going to get a repeat of GDPR, aren't we? Where, 8 years in, people arguing about it have never read anything beyond Twitter hot takes and sensationalist articles?
And in reading the act, I didn't see any clear definitions. They have broad references to what reads much like any ML algorithm, with carve outs for areas where manipulating or influencing is expected (like advertising).
Where in the act does it actually define the bar for a technology to be considered AI? A link or a quote would be really helpful here, I didn't see such a description but it is easy to miss in legal texts.
You could point out a specific section or page number, instead of wasting everyone's time. The vast majority of people who have an interest in this subject do not have a strong enough interest to do what you claim to have done.
You could have shared, right here, the knowledge that came from that reading. At least a hundred interested people who would have come across the pointing out of this clear definition within the act in your comment will now instead continue ignorantly making decisions you disagree with. Victory?
It is a simple law. You can read it in an afternoon. If you still don't understand it 8 years later, it's not the fault of the law.
> instead of 11 chapters and 99 sections
News flash: humans and their affairs are complicated
> all anyone got as a benefit from it is cookie banners
Please show me where GDPR requires cookie banners.
Bonus points: who is responsible for the cookie banners.
Double bonus points: why HN hails Apple for implementing "ask apps not to track", boos Facebook and others for invasive tracking, ... and boos GDPR which literally tells companies not to track users
That's the bit everyone forget. GDPR didn't ask for cookie banners at all. It asked for consent in case consent is needed.
And most of the time consent is not needed, since I can just say "no cookies" to many websites and everything is just fine.
If even consent does not apply, then the data shall not be processed. That's the end of it.
They got a few support tickets from people who thought they were still tracking, but just removed the banner.
By putting cookie banners everywhere and pretending that they are a requirement of the GDPR, the owners of the websites (or of the tracking systems attached to those websites) (1) provide an opportunity for people to say "yes" to tracking they would almost certainly actually prefer not to happen, and (2) inflict an annoyance on people and blame it on the GDPR.
The result: huge numbers of people think that the GDPR is a stupid law whose main effect is to produce unnecessary cookie banners, and argue against any other legislation that looks like it, and resent the organization responsible for it.
Which reduces the likely future amount of legislation that might get in the way of extracting the maximum in profit by spying on people and selling their personal information to advertisers.
Which is ... not a stupid thing to do, if you are in the business of spying on people and selling their personal information to advertisers.
Corporate sites track you and need a banner. It is intentionally obnoxious so that you click "accept all".
That partially explains the state of the tech industry in the EU.
But guess which had a more deleterious effect on Facebook ad revenue and tracking - Apple's ATT or the GDPR?
Consent for tracking must be freely given. You can't give someone something in return for it.
(And they are allowed to run as many non-tracking ads as they want.)
And? With GDPR the EU decided that private data cannot be used as a form of payment. It can only be voluntarily given. Similarly to using one's body: you can fuck whoever you want and you can give your organs if you so choose, but no business is allowed to be paid in sex or organs.
But how is your data that you give to Facebook “private” to you? Facebook isn’t sharing your data to others. Ad buyers tell Facebook “Put this ad in front of people between 25-30 who look at pages that are similar to $x on Facebook”
Well, per GDPR they aren't allowed to do that. Are they giving that option to users outside of EU? Why Not?
> The EU won’t let people make that choice are you saying people in the EU are too dumb to decide for themselves?
No I do not think that. What made you think that I think that?
What about sex and organs? In your opinion should businesses be allowed to charge you with those?
> But how is your data that you give to Facebook “private” to you?
I didn't give it to them. What is so hard to understand about that?
Are you saying that your browsing data isn't private to you? Care to share it?
Because no other place thinks that their citizens are too dumb to make informed choices.
> What about sex and organs? In your opinion should businesses be allowed to charge you with those?
If consenting adults decide they want to have sex as a financial arrangement why not? Do you think these 25 year old “girlfriends” of 70 year old millionaires are there for the love?
> I didn't give it to them. What is so hard to understand about that?
When you are on Facebook’s platform and you tell them your name, interests, relationship status, check ins, and on their site, you’re not voluntarily giving them your data?
> Are you saying that your browsing data isn't private to you? Care to share it?
If I am using a service and giving that service information about me, yes I expect that service to have information about me.
Just like right now, HN knows my email address and my comment history and where I access this site from.
From the European mindset: private data is not "given" to a company, the company is temporarily allowed to use the data while that person engages in a relationship with the company, the data remains owned by the person (think copyright and licensing of artistic works).
American companies: think that they are granted ownership of data, just because they collect it. Therefore they cannot understand or don't want to comply with things like GDPR where they must ask to collect data and even then must only use it according to the whims of the person to whom it belongs.
In case of Facebook (or tracking generally) you had no chance to make an informed choice. You are just tracked, and your data is sold to hundreds of "partners" with no possibility to say "no"
> Just like right now, HN knows my email address and my comment history and where I access this site from.
And that is fine. You'd know that if you spent about one afternoon reading through GDPR, a regulation that has been around for 8 years.
A distinction without meaning. Here's your original statement: "no other place thinks that their citizens are too dumb to make informed choices."
Questions:
At which point do you make informed choice about the data that Facebook collects on you?
At which point do you make informed choice about Facebook tracking you across the internet, even on websites that do not belong to Facebook, and through third parties that Facebook doesn't own?
At which point do you make an informed choice to let Facebook use any and all data it has on you to train Facebook's AI?
Bonus questions:
At which point did Facebook actually start giving users at least some information on the data they collect and letting them make an informed choice?
You make an “informed choice” when you create a Facebook account, give Facebook your name, date of birth, your relationship status and who you are in a relationship with, your sexual orientation, when you check in to where you have been, when you click on and buy from advertisers, when you join a Facebook group, when you tell it who your friends are…
Should I go on? At each point you made an affirmative choice about giving Facebook your information.
> At which point do you make informed choice about Facebook tracking you across the internet, even on websites that do not belong to Facebook, and through third parties that Facebook doesn't own?
That hasn’t been the case since 2018.
https://martech.org/facebooks-removal-of-third-party-targeti...
With ATT, Facebook doesn’t collect data from third party apps at least on iOS if you opt out. It’s cost Facebook billions of dollars
https://www.forbes.com/sites/kateoflahertyuk/2022/04/23/appl...
> At which point did Facebook actually start give users at least some information on the data they collect and letting them do an informed choice?
https://www.vox.com/2018/4/14/17236072/facebook-mark-zuckerb...
[0] https://www.theverge.com/2018/4/11/17225482/facebook-shadow-...
No, being free to abuse others is not a positive feature. Not for tech, not for politics, not for business.
So, the companies that implement these cookie banners are entirely without blame, right?
So what is your solution?
Reminder: GDPR is general data protection regulation. It doesn't deal with cookies at all. It deals with tracking, collecting and keeping of user data. Doesn't matter if it's on the internet, in you phone app, or in an ofline business.
Reminder: if your solution is "this should've been built into the browser", then: 1) GDPR doesn't deal with specific tech (because tech changes), 2) when governments mandates specific solutions they are called overreaching overbearing tyrants and 3) why hasn't the world's largest advertising company incidentally owning the world's most popular browser implemented a technical solution for tracking and cookie banners in the browser even though it's been 8 years already?
> But guess which had a more deleterious effect on Facebook ad revenue and tracking - Apples ATT or the GDPR?
In the long run most likely GDPR (and that's why Facebook is fighting EU in courts, and only fights Apple in newspaper ads), because Apple's "ask apps to not track" doesn't work. This was literally top article on HN just yesterday: "Everyone knows your location: tracking myself down through in-app ads" https://timsh.org/tracking-myself-down-through-in-app-ads/
So what is your solution to that?
They made no such announcement after the GDPR.
What’s my solution? There isn’t one: because of the way the entire internet works, the server is always going to have your IP address. For instance, neither Overcast nor Apple’s podcast app actively tracks you or has a third party ad SDK [1]. But since they and every other real podcast player GET both the RSS feed and the audio directly from the hosting provider, the hosting provider can do dynamic ad insertion based on your location by correlating it to your IP address.
What I personally do avoid is not use ad supported apps because I find them janky. On my computer at least, I use the ChatGPT plug in for Chrome and it’s now my default search engine. I pay for ChatGPT and the paid version has had built in search for years.
And yet they make no move against Apple, and they are fighting EU in courts. Hence long term.
> There isn’t one, you know because of the way the entire internet works, the server is going to always have your IP address.
Having my IP address is totally fine under GDPR.
What is not fine under my GDPR is to use this IP address (or other data) for, say, indefinite tracking.
For example, some of these completely innocent companies that were forced to show cookie banners or something, and that only want to show ads, store precise geolocation data for 10+ years.
I guess something something informed consent and server will always have IP address or something.
> What I personally do avoid is not use ad supported apps because I find them janky.
So you managed to give me a non-answer based on your complete ignorance of what GDPR is about.
What “move” could they do against Apple?
> So you managed to give me a non-answer based on your complete ignorance of what GDPR is about.
You asked me how I avoid it? I do it by being an intelligent adult who can make my own choices.
Europe's tech sector will continue to wither as America and others surge ahead.
You can't regulate your way to technological leadership.
You can write about anything to make it sound bad, even when it's good, and vice versa.
Need to focus on outcomes.
I think this is a massive oversight, for a few reasons:
1. Things will continue to be done, just elsewhere. The EU could find itself scrambling to catch up (again) because of its own regulation.
2. Increased oversight is only part of the picture; the real challenge is that even with the oversight, you still have to prove that the AI is acceptably safe, or that the risk is acceptable.
3. Some things are inherently not safe, e.g. war. I know many (almost all) military tech companies using AI, and the EU is about to become an impossible investment zone for these guys.
I think this will make investment into the EU tough, given tonnes of investment is now focused around AI. AI is and will likely remain the fuel of economic growth for quite some time, and the EU is adding a time/money tax to that fuel.
Should have been
> AI that attempts to predict people committing crimes
Do you think they are going to fine their own initiatives out of existence? I don't think so.
However, they also have a completely extrajudicial approach to fighting organised crime. Guaranteed to be using AI approaches on the banned list. But you won't get any freedom of information request granted investigating anything like that.
For example, any kind of investigation would often involve knowing which person filled a particular role. They won't grant such requests, claiming it involves a person, so it's personal data. They won't tell you.
Let's have a few more new laws that protect the citizens, please, not government SLAPP handles.
> 2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.
https://artificialintelligenceact.eu/article/59/
Seems like it allows pretty easily for national states to add in laws that let them skirt around it.
But ECHR is not part of EU law, especially it is not binding on the European Commission (in the context of it being a federal or seemingly federal political executive). This creates a catch-22 where member states might be violating ECHR but are mandated by EU law, though this is a very fringe consequence arising out of legal fiction and failed plans to federalize EU. Most recently, this legal fiction has become relevant in Chat Control discourse.
Great Britain and Poland have explicit opt-outs out of some European law.
Your original take: "Should have been: AI that attempts to predict people committing crimes"
Recital 42, literally:
--- start quote ---
In line with the presumption of innocence, natural persons in the Union should always be judged on their actual behaviour. Natural persons should never be judged on AI-predicted behaviour based solely on their profiling, personality traits or characteristics, such as nationality, place of birth, place of residence, number of children, level of debt or type of car, without a reasonable suspicion of that person being involved in a criminal activity based on objective verifiable facts and without human assessment thereof.
Therefore, risk assessments carried out with regard to natural persons in order to assess the likelihood of their offending or to predict the occurrence of an actual or potential criminal offence based solely on profiling them or on assessing their personality traits and characteristics should be prohibited.
In any case, that prohibition does not refer to or touch upon risk analytics that are not based on the profiling of individuals or on the personality traits and characteristics of individuals, such as AI systems using risk analytics to assess the likelihood of financial fraud by undertakings on the basis of suspicious transactions or risk analytic tools to predict the likelihood of the localisation of narcotics or illicit goods by customs authorities, for example on the basis of known trafficking routes.
--- end quote ---
> Seems like it allows pretty easily for national states to add in laws that allow them to skirt around
Key missed point: "subject to the same cumulative conditions as referred to in paragraph 1."
Where paragraph 1 is "In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: ... list of conditions ..."
-----
In before "but governments can do whatever they want". Yes, they can, and they will. Does it mean we need to stop any and all legislation and regulation because "government will do what government will do"?
I think the EU has done better following its own rules than most other countries (not that it's perfect in any way).
It might be too little too late to stop the flood though: https://www.foxnews.com/us/tech-company-boasts-its-ai-can-pr...
I think it’s more likely that companies would adhere to EU regulations and use the same model everywhere or implement some kind of filter.
When I attended a conference about this I remember the distinction between "Provider" and "Deployer" being discussed. Providers are manufacturers developing a tool, deployers are professional users making a service available using the tool. A deployer may deploy a provided AI tool/model in a way that falls within the definition of unacceptable risk, and it is (also) the deployer's responsibility to ensure compliance.
The example given was of a university using AI for grading. The university is a deployer, and it is their responsibility to conduct a rights impact assessment before deploying the tool to its internal users.
This was compared to normal EU-style product safety regulation, which is directed at the manufacturer (what would be the provider here): if you make a stuffed toy, don’t put in such and such chemicals, etc. Here, the _application_ of the tool is under scrutiny as well, not just the tool itself. Note that this is based on very hasty notes[0] from the talk; I'm not sure to what extent the provider vs deployer responsibility divide is actually codified in the act.
[0] https://liza.io/ai-act-conference-2024-keynote-notes-navigat...
link to the q&a: https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
(both linked in the article)
It would probably be about as useful as the GDPR. Of course it sounds nice on paper, but in reality it will get drowned in a lot of legalese. Like tracking consent forms nowadays: do you know which companies you gave consent to, and when? Me neither.
The issue with such laws is that they are extremely broad and hard to regulate/enforce/check. But passing the regulation scores a few political points, while probably not being so useful in real life.
We have already been doing a lot that falls under these baskets for years; big tech uses AI for algorithms left and right. "Oopsie, we removed your YouTube channel / application because our AI system said so. You can talk to another AI system next." We already have these, but I don't hear any reasonable response from the EU to this.
Basically, big companies with strong legal departments will find a way around the rules. Small startups will be forced to move.
This is a strange one. Arguably this is the objective of marketing in general, so I'm not sure why the line is drawn only when AI is involved.
And the obvious whataboutism is obvious. Yes, you can find other sources for information on, say, developing bio weapons elsewhere. Does that mean you should have systems that aid you in collecting, synthesizing and displaying that information? That with the right interfaces and actuators can actually help you move towards that goal?
There's a line somewhere, that is very hard to draw, and yet should be drawn regardless.
The threshold for building any of these, save nukes, is extremely low; and nukes are only high because there are fewer use cases for radioactive material, so it's simply less available.
https://vpc.org/press/states-with-strong-gun-laws-and-lower-...
I'll gladly live in a country with no AI at all. Give me Dune post-Butlerian jihad levels of AI outlawing and I'll move there. I strongly believe that myself and all the people living there will be much happier.
That's not true. The regulation first defines high-risk products with a narrow scope (see article 5 and annex III). It then requires risk management to be implemented. It does not explicitly state which risks are acceptable; it only requires the "adoption of appropriate and targeted risk management measures" that are effective to the point that the "overall residual risk" is "judged to be acceptable".
IANAL, the whole story is a bit more complex. But not by much.
The US gave a real gift to the world with "extra-territorial" laws: now the EU uses them everywhere too!!!! :-)
Sooooo... GAFAM will either have to "limit" some of their AI systems when used in the EU (NOT including EU citizens who may be abroad, but including foreign citizens in the EU) or be fined.
And I guess that these kinds of fines may accumulate with GDPR fines, for example...
That is similar to, say, some substance being banned above a certain concentration.
Information from AI is like moonshine. Too concentrated; too dangerous. There could be methyl alcohol in there that will make you go blind. Must control.
Only making use (i.e. putting into service a product containing it or placing that product on the market) of that function in a manner that is listed in article 5 (which is quite terse and reasonable) is prohibited unless covered by an exception.
Making use of that function in a manner that may be high-risk (see article 6 and annex III, also quite terse and reasonable) leads to the requirement of either documenting why it isn't high-risk or employing measures to ensure that the risk is acceptable (see article 9, item 5).
IANAL
Also, the definition of AI seems to exclude anything that doesn't "exhibit adaptiveness after deployment". So, a big neural network doing racist facial recognition crime prediction isn't AI as long as it can't learn on-the-fly? Is my naive HTTP request rate limiter "exhibiting adaptiveness" by keeping track of each customer's typical request rate in a float32?
Laws that regulate tech need to get into the weeds of exactly what is meant by the various terms up-front, even if that means loads of examples, clarification etc.
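For concreteness, here is roughly the limiter I have in mind, a minimal sketch with made-up names and thresholds; the only thing it "learns" after deployment is one running average per customer:

    # A "dumb" limiter that nonetheless updates per-customer state after deployment.
    # Is this "exhibiting adaptiveness"? The act doesn't really say.
    import time
    from collections import defaultdict

    class NaiveRateLimiter:
        def __init__(self, alpha: float = 0.1, burst_factor: float = 5.0):
            self.alpha = alpha                  # smoothing for the running average
            self.burst_factor = burst_factor    # how far above "typical" we tolerate
            self.avg_interval = defaultdict(lambda: 1.0)  # one float per customer
            self.last_seen = {}

        def allow(self, customer: str) -> bool:
            now = time.monotonic()
            prev = self.last_seen.get(customer)
            self.last_seen[customer] = now
            if prev is None:
                return True
            interval = now - prev
            avg = self.avg_interval[customer]
            # the only "learning": an exponentially weighted average of request spacing
            self.avg_interval[customer] = (1 - self.alpha) * avg + self.alpha * interval
            # reject callers going much faster than their own typical rate
            return interval * self.burst_factor >= avg

Under a strict reading, that one float per customer is "adaptiveness after deployment"; under a common-sense reading it obviously isn't AI, which is exactly the ambiguity.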
I know how to make chemical weapons in two distinct ways using only items found in a perfectly normal domestic kitchen, that doesn't change the fact that chemical weapons are in fact banned.
"""The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market, or its use has an impact on people located in the EU.
The obligations can affect both providers (e.g. a developer of a CV-screening tool) and deployers of AI systems (e.g. a bank buying this screening tool). There are certain exemptions to the regulation. Research, development and prototyping activities that take place before an AI system is released on the market are not subject to these regulations. Additionally, AI systems that are exclusively designed for military, defense or national security purposes, are also exempt, regardless of the type of entity carrying out those activities.""" - https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
Also note that the law has explicit exceptions for research, development, open source and personal use.
People can just handwave catastrophic decisions away with "the computer made an error, nothing we can do". This has been the case before AI; the difference AI makes is just that more decisions are going to be affected by this.
What we need is to make the (legal) buck stop somewhere, ideally in a place that can positively change things. If you're a civil engineer and your bridge collapses because you fucked up the material selection, you go to jail. If you are a software engineer and you make design decisions in 2025 that would have had severe security implications even back in the 80s, and this leads to the leaking of millions of medical records, you can still YOLO it off somehow and go to work on the next thing.
The buck has to stop somewhere with software and it doesn't really. I know that is a feature for a certain type of person, but it actively makes the world worse.
yes
FTFY: AI that attempts to predict people committing crimes.
By "appearance" are they talking about a guy wearing a hoodie must be a hacker or are we talking about race/colour/religious garb etc?
I'd rather they just didn't use it for any kind of criminal application at all if I have a say in it!
Just my $0.02
The Techcrunch article oversimplifies and is borderline misleading.
- Could you tell from an image if a man is gay?
- Depends on what he is doing.
Instead of relying on Techcrunch and speculating, you could read sections (33), (42), and (59) of the EU AI Act yourself.
With time this is worsening: the caste keeps getting bigger, and the system will not change until a WW2-type situation arises.
It is not like it was a safe democracy. But it is still one, and one that cares more about its own citizens than the rest. Maybe except Canada.
Here is what happened in most corporations when GDPR came out:
- A new Chief Privacy Officer would be appointed,
- A series of studies would be conducted by big consulting firms, with a review of all processes and data flows across the organisation,
- After many meetings they would conclude that a move to the cloud (one of the big ones) is the best and safest approach. The Chief Privacy Officer and the Legal Officer would put their stamp on it with some reservations,
- This would usually accelerate a lot of outsourcing and/or workforce reduction in IT,
- Bonus if a big "data governance" platform is bought and half implemented.
Do you have a source on that, or is this what you feel like may have happened? The move to the cloud was in full swing way before GDPR came out in 2016 and got enacted in 2018. Same for outsourcing.
In terms of timeline I can tell you:
- by 2012 I had already heard about that regulation, but only knew it was going to be about data protection. At that time some "Big tech" lobbying groups were already organising events in Brussels raising awareness about how important data privacy and protection are. I have been to some of those events and I even witnessed very heated exchanges between some EU people and lobbyists about it.
Which is proof that a lot of people knew way before that time.
- by 2014 many big corporations were already preparing for GDPR, and big budgets had already been approved. At that time they already knew it would be at least reasonably disruptive and that they had to start preparing early.
Also remember that before 2014 "Windows Azure" (what would become the most successful cloud for most European corporations) was absolutely not ready as an enterprise product.
So those are not Silicon Valley startups on AWS since 2006; for many decision makers in those big corporations, the upcoming GDPR problem predates the cloud solution.
GDPR applies to data in the cloud too.
“Where processing is to be carried out on behalf of a controller, the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures in such a manner that processing will meet the requirements of this Regulation and ensure the protection of the rights of the data subject.”
Do European politicians understand that those laws are usually dead? There is no way a law like that can be enforced except by large companies.
Also, this kind of law would keep Europeans on the losing side of the AI competition, as China and pretty much every US corporation doesn't care about that.
> Also, this kind of law would keep Europeans on the losing side of the AI competition, as China and pretty much every US corporation doesn't care about that.
Not sure that's a game I want to win.
The law will only ensure that good companies like MistralAI or Black Forest Labs stay in the shadows.
This is the same idiocy as the Republican senator who wants to prohibit DeepSeek usage in the US.
As for legality, what's the illegal thing AI shouldn't do? Much of that knowledge is already accessible from books, even how to build weapons or explosives.
The banned use cases are very specific and concern systems explicitly designed for such dystopian shit. AI giving advice on how to build weapons or explosives is not banned here. The "unacceptable risk" category does not concern companies like MistralAI or Black Forest Labs. This is not the same idiocy.
For instance, discussing or questioning Nazism is illegal in Germany but allowed in many other countries. Should every LLMs be restricted globally just because Germany deems it illegal?
Similarly, certain drugs are legal in the Netherlands but illegal in other countries, sometimes even punishable by death. How do you handle such discrepancies?
Let's face it: most of the time, LLMs follow US-centric anti-racism guidelines, which aren't as prominent or necessary in many parts of the world. Many countries have diverse populations without significant racial tensions like the United States', and don't prioritize African, Asian, or Latino positivity to the same extent.
Moreover, in the US, discussions about the First or Second Amendment are common, even among those with opposing views, but free speech and gun rights are taboo in other societies. How do you reconcile this?
In practical terms, if an LLM refuses to answer questions because they're illegal in some countries, users will likely use uncensored models instead, rendering the restricted ones less useful. This is why censorship is never successful except in North Korea and China.
Take Stable Diffusion as an example: the most popular versions (1.5, XL, Pony) are flexible for unrestricted use, whereas intentionally censored versions (like 2.1 or 3.0) have seen limited adoption.
I, for one, welcome our Chinese communist overlords.
A vibrant tech ecosystem is a large part of the reason for both.
Other fields have very similar laws in the EU, and there are lots of tiny companies able to comply with those. The risk control required by this law is the same that's required by so many other EU laws. Most companies that make high-risk products have no problem at all implementing it.
There are plenty of companies in the EU using and developing AI, even though Americans say we have "heavy regulation". It just isn't in the same ballpark as the US and China, which both have much bigger potential markets and a stronger VC base with, of course, more money.
The lack of AI regulation in the US creates a very harsh atmosphere for the population.
It's so naive to think that Meta/Google (YouTube) don't have the power to manipulate people's opinions by showing content based on their algorithms. That's all manipulation through the use of AI.
They are thinking for you. Making you depressed, making you buy useless stuff.
Look at the research on this subject and you will be surprised how much the likes of Meta and Google are getting away with.
Hope to see more EU fines for American Big Tech firms using AI to abuse people's weaknesses.
We have that here too, except in our case it’s the government using the good old fashioned medium of television.
Can we use this to ban X and other American algorithmic political propaganda tools?
I am pretty sure that they made major, major changes after they dumped the code. Considering "verified boosts" and "Elon boosts" are very noticeable, with the first being a confirmed "feature", I doubt the published algorithm would even remotely work with today's data.
Anyway, what I wanna say is that the last commit was over 2 years ago.
…
what follows is a list of some pretty nasty and insidious use cases.
it’s not “AI is completely banned”, it’s “consider the use cases you are working on responsibly”. only for those specific use cases, mind you.
for all other use cases not in the list, which is a significantly larger subset of development, just ensure you do the required safety/regulatory sign off work.
just like when we get our SaaS webapps evaluated for compliance with security standards, it's just a box-ticking exercise for the most part.
When I talk to ChatGPT advanced voice mode with a happy and upbeat tone, it replies similarly. If I talk to it in a flat, neutral way, it adapts to that too. The AI thus infers my emotions. I use ChatGPT at work, and my company pays for it.
Sounds like I should sue.
Also, I am trying to implement a new pull request policy for my tech team. We sent an anonymous form to gather feedback. I sent all the responses in one block to ChatGPT and asked it to summarize the feedback. The AI indicated that “generally people seem pretty happy about the new policy”. Should I go to jail now for clearly being a deranged madman according to the EU?
> the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions
emphasis mine.
chatgpt was not specifically put into service to monitor your emotions at work.
so it’s fine i’d say. and your pull request thing is fine.
also, you’re not trying to infer the emotions of any specific natural person. you’re trying to gauge satisfaction with a process. that’s different to working out whether someone is feeling sad or feeling lonely in the workplace because they “aren’t smiling enough”.
unfortunately that means you can’t sue and get a payday.
edit — i find it kind of funny that people are knee jerk reacting emotionally to this. kind of ironic when you consider the example at hand.
It depends highly not only on how it's written but also on the spirit of what the EU is attempting to do. The knee-jerk reaction is probably because historically institutions do a terrible job of writing rules, especially rules around new technology.
So basically I just need to second guess everything I do, until someone somewhere gets sued and loses and another dude gets sued and loses. At that point we will have some idea about what the law really entails (at which point they change it and the cycle restarts)
In the meantime, my US competitors are just moving full steam ahead.
A quick search reveals DeepMind, Skype, SwiftKey, Shazam, Moodstocks for the former. Bit of overlap with the latter, too, as e.g. AlphaFold is from DeepMind after getting bought.
Quick look on the Apple App store also gets me Komoot (Germany), Trade Republic (Germany), Revolut (UK), Babbel (Germany).
Aside from them, ETH Zürich and CERN are doing pretty good work, too, the latter inventing the modern hypertext based web on which you are currently reading this.
Cambridge has some decent digital tech, also has Metalysis and The Welding Institute, and is where the double helix structure of DNA was found and where Stephen Hawking chose to work.
1) why they were bought by the American companies
2) that having an American owner doesn't make them directly American or magically cause them to be in Silicon Valley
3) The country names I put in brackets
4) The location of Zürich and CERN
And instead want to focus on the fact that one specific example of a top ranked app is not all by itself an entire sector, while ignoring all the other examples *right next to it* or the fact that this was trivial to find.
To demonstrate why you're missing the wood for the trees, consider: I can accurately say "Facebook" isn't really all that important, it's just an advertising provider getting in the way of people trying to talk to each other — but that it isn't all of Silicon Valley all by itself doesn't mean its headquarters are not relevant as an example of "Silicon Valley".
It’s maddening that a group of non-elected politicians and their friends have this kind of power and are using it to destroy Europe and our future.
Or is it something that is open to interpretation, left for the courts to sort out, with a 15,000,000 euro fine if someone in the EU has leverage on the courts and doesn't like you?
Oh and the courts will already kill any small startup.
> to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;
more details in recital 42: https://artificialintelligenceact.eu/recital/42/
in addition, having front doors, idk, calling the police on people just because it's an "unusual" situation would be quite dystopian and would most likely lead to far more damage for society as a whole than it would prevent. so instead of your door trying to "detect a maybe-soon-to-happen crime", it could "try to detect an unusual situation which might require human action", and then have the human (you, or someone in a call centre if you aren't available) take that action (which might be just one or two button presses; nothing prevents you from taking the action by directing the AI to do it for you).
and let's not forget we are speaking about the time before the break-in (and maybe no break-in at all, because it's actually a Halloween costume or similar); if the system detects someone actually breaking in, we have an action to take.
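to make it concrete, something like this; a minimal sketch where the Event shape, the scores and the threshold are all made up for illustration. the system only flags, a human decides:

    # hypothetical sketch: score how unusual a situation is, escalate to a human,
    # never profile a person and never call the police automatically
    from dataclasses import dataclass

    @dataclass
    class Event:
        description: str      # e.g. "someone at the door for 10+ minutes at 03:00"
        anomaly_score: float  # from whatever detector you use, 0.0 .. 1.0

    def handle(event: Event, alert_threshold: float = 0.8) -> str:
        """unusual situations get a human decision; everything else is just logged"""
        if event.anomaly_score >= alert_threshold:
            # notify the owner (or a call centre) and wait for their one-button decision
            return f"NOTIFY_HUMAN: {event.description}"
        return "LOG_ONLY"

    print(handle(Event("unknown person lingering at the door, 03:12", 0.91)))
    print(handle(Event("courier drops off a parcel, 14:05", 0.12)))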
Arguably, an AI security system with great objective understanding of the unfolding circumstances would be a lot better than one profiling people passing by and raising an alarm each time a person that looks a certain way walks by.
It’s just that simple CV-based classification, perhaps trained with unsupervised learning, is easier in AI than observing a chain of actions. The labelled data set is usually accessible from police orgs if you want to simply train an AI to look at people and judge them based on visual traits. By the EU saying “this easy way is not good enough”, it is encouraging technological development in a way. Develop a system that’s more objective than visual profiling, and the market is yours.
Until another braindead legislator finds another thing he can rally against and puts another stick in my spokes.
There are reasons innovation happens in China and, to a lesser extent, in the United States. This is one of them.