Thursday, February 09, 2023

IMPLEMENTING THE DATA GOVERNANCE ACT: THE ECONOMIC STAKES

 CNIL, here

The Financial Services Sector’s Adoption of Cloud Services

 U.S. Department of the Treasury, here

Exclusive Interview: OpenAI’s Sam Altman Talks ChatGPT And How Artificial General Intelligence Can ‘Break Capitalism’

 Forbes, here

de Streel on the DMA

Chez Oles, video here.

Microsoft/Activision Blizzard

 CMA, Provisional findings report, here

Italy bans U.S.-based AI chatbot Replika from using personal data

 Reuters, here

“Where are we heading with digital identities?”

 EDPS, here

[Asked ChatGPT to summarize the speech: 

"The article is about the European Digital Identity Wallet framework (EDIF) proposed by the revision of the eIDAS, which aims to provide citizens with a secure and privacy-friendly digital identity. The writer, who represents data protection, believes that the EDIF has the potential to be a big success and that the relationship between EDIF and the GDPR can be examined under two different aspects. On one hand, if implemented correctly, it can effectively protect individuals and their data and comply with essential provisions of the GDPR, such as data accuracy and security of processing. On the other hand, the data processing activities related to digital identity must comply with the GDPR, particularly with data minimization and protection by design and by default. The writer expresses concern about the 28 Implementing Acts in the proposal, which will decide on the actual implementation aspects, and questions the basis for these acts and whether there are mandatory certification mechanisms in place to ensure compliance with the GDPR"]

Can you imagine your students using ChatGPT to summarize the reading assignments instead of actually studying them? I can.
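
(For the curious: a minimal sketch of how such a summary can be produced with the OpenAI Python client. The model name, the speech_text placeholder, and the prompt wording are my own illustrative choices, not anything taken from the speech or prescribed by OpenAI.)

    # Minimal sketch: asking a chat model to summarize a speech.
    # Assumes `pip install openai` and an API key in the
    # OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    speech_text = "..."  # paste the full text of the speech here

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Summarize the following speech in one paragraph:\n\n"
                       + speech_text,
        }],
    )
    print(response.choices[0].message.content)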

Psychologists to help the FTC evaluate alleged mental health harms

A. Bedoya, here.

Remarks and Regrets

 FTC, here

"I don’t mean to criticize the FTC in pointing this out—I think former Commissioner Jones Harbour was incredibly visionary and ahead of her time. The FTC was just being faithful to our existing practices in merger review. But clearly there was something going on that we did not catch, given what we know about the ad tech market today. So I think it’s a real lesson learned that shows that we antitrust enforcers need to continually re-examine and fine tune our approach. The goal of our merger guideline revision project is to do just that and enable us to more effectively use our tools so we can make better predictions about the future effects of proposed mergers."

ChatGPT is a data privacy nightmare, and we ought to be concerned

 ArsTechnica, here

ChatGPT itself on the same topic:

As we live in a world where technology is rapidly advancing, artificial intelligence and machine learning models have become an integral part of our lives. One of the most popular and powerful language models developed by OpenAI is ChatGPT. It is a state-of-the-art model that has been trained on a vast amount of data and can generate human-like responses to various questions. However, while ChatGPT has many benefits, it also raises serious concerns about privacy.

First and foremost, ChatGPT is trained on a massive amount of data, including text from various sources such as books, articles, and even conversations. This data is not necessarily public or anonymous, which means that it can contain sensitive information about individuals and organizations. This sensitive information can be used for malicious purposes, such as identity theft, financial fraud, or even cyberbullying.

Furthermore, the nature of language models like ChatGPT means that it can store vast amounts of data in its memory. This data can be accessed and used by anyone who has access to the model, and there is no guarantee that it will not be misused. In fact, there have been instances where language models have been used for unethical purposes, such as generating fake news or creating biased content.

Moreover, the use of ChatGPT raises serious ethical concerns about the right to privacy. As individuals and organizations continue to share more and more personal information online, they are putting their privacy at risk. If this data falls into the hands of malicious actors, it can be used to target individuals and organizations with malicious campaigns. Furthermore, there is no way for individuals and organizations to know how their data is being used by these models, and there is no way for them to opt out of this process.

In conclusion, while ChatGPT is a powerful tool that has many benefits, it also raises serious concerns about privacy. As the use of AI and machine learning models continues to grow, it is imperative that we take the necessary steps to protect individuals' and organizations' sensitive information. This includes ensuring that these models are trained on anonymous and public data, as well as implementing strict privacy policies to prevent the misuse of sensitive information. We must also consider the ethical implications of using these models, and work towards finding a balance between their benefits and the risks they pose to privacy.

When asked about ChatGPT's compliance with the GDPR:

The General Data Protection Regulation (GDPR) is a regulation in EU law on data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA). The regulation sets out the rules for how companies and organizations must handle personal data and ensure that the privacy of individuals is protected.

It is difficult to say definitively whether ChatGPT is compliant with the GDPR without a thorough review of how the model is used and how personal data is collected, processed, and stored. However, here are some considerations:

  1. Purpose Limitation: The GDPR requires that personal data be collected for specified, explicit, and legitimate purposes. If ChatGPT is used for purposes that are not compliant with the GDPR, it may not be in compliance with the regulation.
  2. Data Minimization: The GDPR requires that personal data be limited to what is necessary for the intended purpose. If ChatGPT collects more data than is necessary, it may not be in compliance with the regulation.
  3. Transparency: The GDPR requires that individuals be informed about how their personal data is being used. If ChatGPT is not transparent about how personal data is collected, processed, and stored, it may not be in compliance with the regulation.
  4. Data Security: The GDPR requires that personal data be securely processed and stored. If ChatGPT does not have appropriate security measures in place, it may not be in compliance with the regulation.

Overall, it is the responsibility of the entity using ChatGPT to ensure that the model is used in a way that is compliant with the GDPR. This includes conducting a data protection impact assessment, implementing appropriate technical and organizational measures to secure personal data, and providing individuals with clear information about how their personal data is being used.

There was a sudden system error, and then I asked again (exactly the same prompt) and got a different answer:

As an AI model developed by OpenAI, ChatGPT is not subject to the General Data Protection Regulation (GDPR), which is a regulation in EU law on data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA).

However, OpenAI takes data privacy and security seriously and has implemented measures to ensure that personal data is processed in accordance with applicable privacy laws, including the GDPR. If you are using ChatGPT in the EU or the EEA, it is the responsibility of the entity that controls the use of ChatGPT to comply with the GDPR and other applicable privacy laws.

If you have concerns about the use of ChatGPT and data privacy, I would recommend contacting OpenAI directly for more information.

What happened?
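
(My guess, for what it's worth: nothing mysterious. ChatGPT samples each token from a probability distribution rather than always taking the single most likely continuation, so the identical prompt can produce divergent answers across runs. A toy sketch of temperature sampling, with entirely made-up numbers:)

    # Toy illustration of temperature sampling: the same "prompt" can
    # yield different outputs because tokens are drawn at random from
    # the model's probability distribution. All numbers are invented.
    import math
    import random

    def sample_token(logits, temperature=1.0):
        # Scale the scores by temperature, then apply softmax.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw a token index according to those probabilities.
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    tokens = ["is subject to", "is not subject to"]
    logits = [1.2, 1.0]  # hypothetical scores for the two continuations

    for run in range(3):
        choice = sample_token(logits, temperature=0.9)
        print(f"run {run}: ChatGPT {tokens[choice]} the GDPR ...")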


Study on the provision of information to consumers about the processing of vehicle-generated data

 EC, here

Internet Centralization: What Can Standards Do?

 IETF, here

Banning Noncompetes Is Good for Innovation

 M. Lemley, O. Lobel, here

Market Study Report on Mobile OS and Mobile App Distribution

 Japan FTC, here

Wednesday, February 01, 2023

EU Chief Technologist Wanted

 Here.

Big Data and Artificial Intelligence in Digital Finance

 J. Soldatos, D. Kyriazis, here

Digital Services Act: Commission provides guidance for online platforms and search engines on publication of user numbers in the EU

 Here.

DOJ v. Google

Lost debate, with D. Dayen (from 3:23), here

Meta Wins Court Nod to Buy Virtual Reality Startup in Loss for Khan’s FTC

 Bloomberg, here

COMPETITION IN THE MOBILE APPLICATION ECOSYSTEM

 US Department of Commerce, here

EU studying whether Big Tech should pay network costs -EU document

 MarketScreener, here

Do DMA obligations for gatekeepers create entitlements for business users?

 O. Andriychuk, here

Is the DMA the real AI regulation?

 The Tech Brief, here

DOJ Suit to Break Up Google Was Years in the Making for Antitrust Chief

 WSJ, here (and here). 

A discussion with James Bessen about his book "The New Goliaths"

 YouTube, here

Cutting Through the Jargon - Independent Audits in the Digital Services Act

 Mozilla, here

Deploying Network Analysis in Antitrust Law

 Adlc, here

Study on the impact of recent developments in digital advertising on privacy, publishers and advertisers

 Report for the EC, here

A Case for Openness – Book Publishing and the Role of Amazon

 A. Kreutzmann-Gallash, S. Schroff, here