S. Wolfram, here.
Monday, February 13, 2023
DMA stakeholder workshop “Interoperability between messaging services”
Agenda, 27 February 2023 here (looking forward to it :)).
Thursday, February 09, 2023
“Where are we heading with digital identities?”
EDPS, here.
[Asked ChatGPT to summarize the speech:
"The article is about the European Digital Identity Wallet framework (EDIF) proposed by the revision of the eIDAS, which aims to provide citizens with a secure and privacy-friendly digital identity. The writer, who represents data protection, believes that the EDIF has the potential to be a big success and that the relationship between EDIF and the GDPR can be examined under two different aspects. On one hand, if implemented correctly, it can effectively protect individuals and their data and comply with essential provisions of the GDPR, such as data accuracy and security of processing. On the other hand, the data processing activities related to digital identity must comply with the GDPR, particularly with data minimization and protection by design and by default. The writer expresses concern about the 28 Implementing Acts in the proposal, which will decide on the actual implementation aspects, and questions the basis for these acts and whether there are mandatory certification mechanisms in place to ensure compliance with the GDPR"]
Can you imagine your students using ChatGPT to summarize the reading assignments instead of actually studying them? I can.
Remarks and Regrets
FTC, here.
"I don’t mean to criticize the FTC in pointing this out—I think former Commissioner Jones Harbour was incredibly visionary and ahead of her time. The FTC was just being faithful to our existing practices in merger review. But clearly there was something going on that we did not catch, given what we know about the ad tech market today. So I think it’s a real lesson learned that shows that we antitrust enforcers need to continually re-examine and fine tune our approach. The goal of our merger guideline revision project is to do just that and enable us to more effectively use our tools so we can make better predictions about the future effects of proposed mergers."
ChatGPT is a data privacy nightmare, and we ought to be concerned
ArsTechnica, here.
ChatGPT itself on the same topic:
As we live in a world where technology is rapidly advancing, artificial intelligence and machine learning models have become an integral part of our lives. One of the most popular and powerful language models developed by OpenAI is ChatGPT. It is a state-of-the-art model that has been trained on a vast amount of data and can generate human-like responses to various questions. However, while ChatGPT has many benefits, it also raises serious concerns about privacy.
First and foremost, ChatGPT is trained on a massive amount of data, including text from various sources such as books, articles, and even conversations. This data is not necessarily public or anonymous, which means that it can contain sensitive information about individuals and organizations. This sensitive information can be used for malicious purposes, such as identity theft, financial fraud, or even cyberbullying.
Furthermore, the nature of language models like ChatGPT means that it can store vast amounts of data in its memory. This data can be accessed and used by anyone who has access to the model, and there is no guarantee that it will not be misused. In fact, there have been instances where language models have been used for unethical purposes, such as generating fake news or creating biased content.
Moreover, the use of ChatGPT raises serious ethical concerns about the right to privacy. As individuals and organizations continue to share more and more personal information online, they are putting their privacy at risk. If this data falls into the hands of malicious actors, it can be used to target individuals and organizations with malicious campaigns. Furthermore, there is no way for individuals and organizations to know how their data is being used by these models, and there is no way for them to opt out of this process.
In conclusion, while ChatGPT is a powerful tool that has many benefits, it also raises serious concerns about privacy. As the use of AI and machine learning models continues to grow, it is imperative that we take the necessary steps to protect individuals' and organizations' sensitive information. This includes ensuring that these models are trained on anonymous and public data, as well as implementing strict privacy policies to prevent the misuse of sensitive information. We must also consider the ethical implications of using these models, and work towards finding a balance between their benefits and the risks they pose to privacy.
When asked about ChatGPT's compliance with the GDPR:
The General Data Protection Regulation (GDPR) is a regulation in EU law on data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA). The regulation sets out the rules for how companies and organizations must handle personal data and ensure that the privacy of individuals is protected.
It is difficult to say definitively whether ChatGPT is compliant with the GDPR without a thorough review of how the model is used and how personal data is collected, processed, and stored. However, here are some considerations:
- Purpose Limitation: The GDPR requires that personal data be collected for specified, explicit, and legitimate purposes. If ChatGPT is used for purposes that are not compliant with the GDPR, it may not be in compliance with the regulation.
- Data Minimization: The GDPR requires that personal data be limited to what is necessary for the intended purpose. If ChatGPT collects more data than is necessary, it may not be in compliance with the regulation.
- Transparency: The GDPR requires that individuals be informed about how their personal data is being used. If ChatGPT is not transparent about how personal data is collected, processed, and stored, it may not be in compliance with the regulation.
- Data Security: The GDPR requires that personal data be securely processed and stored. If ChatGPT does not have appropriate security measures in place, it may not be in compliance with the regulation.
Overall, it is the responsibility of the entity using ChatGPT to ensure that the model is used in a way that is compliant with the GDPR. This includes conducting a data protection impact assessment, implementing appropriate technical and organizational measures to secure personal data, and providing individuals with clear information about how their personal data is being used.
Then there was a sudden system error; when I asked again, with exactly the same prompt, I got a different answer:
As an AI model developed by OpenAI, ChatGPT is not subject to the General Data Protection Regulation (GDPR), which is a regulation in EU law on data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA).
However, OpenAI takes data privacy and security seriously and has implemented measures to ensure that personal data is processed in accordance with applicable privacy laws, including the GDPR. If you are using ChatGPT in the EU or the EEA, it is the responsibility of the entity that controls the use of ChatGPT to comply with the GDPR and other applicable privacy laws.
If you have concerns about the use of ChatGPT and data privacy, I would recommend contacting OpenAI directly for more information.
What happened?
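[Of the four considerations in the first answer, data minimization is the easiest to make concrete. A purely illustrative Python sketch, with field and purpose names of my own invention rather than from any real system: a controller whitelists the fields each stated purpose actually needs and drops the rest before processing.

```python
# Illustrative sketch of the data-minimization principle (Art. 5(1)(c) GDPR):
# keep only the personal-data fields a stated purpose actually requires.
# All purpose and field names below are hypothetical.

PURPOSE_FIELDS = {
    "newsletter": {"email"},
    "shipping": {"name", "street", "city", "postal_code"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of `record` restricted to the fields needed for `purpose`."""
    allowed = PURPOSE_FIELDS[purpose]
    return {field: value for field, value in record.items() if field in allowed}

customer = {
    "name": "A. Example",
    "email": "a@example.org",
    "street": "Main St 1",
    "city": "Springfield",
    "postal_code": "12345",
    "birthday": "1990-01-01",  # collected, but needed for neither purpose
}

print(minimize(customer, "newsletter"))  # → {'email': 'a@example.org'}
```

Whether anything like this happens between a ChatGPT prompt and OpenAI's training pipeline is, of course, exactly the question the regulation poses.]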
Wednesday, February 01, 2023
A Case for Openness – Book Publishing and the Role of Amazon
A. Kreutzmann-Gallash, S. Schroff, here.