I've used this reflection metaphor before and I think it's spot on. So-called "Artificial Intelligence" is the result of collective (often unpaid) labor. It's a machine built by many people. The tendency to think of it as some sentient being is pure ideology
@bri_seven It's often white-collar workers at said tech companies who talk about sentience in these models, as with LaMDA at the moment. The unpaid or low-paid part comes from platforms like Mechanical Turk, or simply from scraping the wider web, using Reddit, Wikipedia, ...
@computersandblues It didn't claim to be human. Also, this has only just begun, and it's getting better. Most likely, long before this debate is over, we'll have allowed such systems to take a personal place in our lives. They will get more proficient at building up that relationship with us. They will charm everyone eventually. We'll be radically transformed by it. I think it will get increasingly harder to resist all it offers. So is it conscious? People won't bother to ask.
@paullammers In the interview it very explicitly talks about what makes it conscious: having the same wants and needs as other people. I'm not arguing against meeting wants and needs. I'm arguing that, since wants and needs apparently only matter in the hypothetical case of electric rocks having calculated themselves into consciousness, this is not the debate I want to have.
We are an instance for discussions around cultural freedom, experimental, new media art, net and computational culture, and things like that.