Artifice & Intelligence by Emily Tucker
Good advice for anyone writing on technology, actually.
"Starting today, the Privacy Center will stop using the terms “artificial intelligence,” “AI,” and “machine learning” in our work to expose and mitigate the harms of digital technologies in the lives of individuals and communities."
"Corporations have essentially colonized the imaginative space that Turing’s paper asked us to explore. Instead of pursuing the limits of computers’ potential for simulated humanity, the hawkers of “AI” are pursuing the limits of human beings’ potential to be reduced to their calculability."
"Instead of using the terms “artificial intelligence,” “AI,” and “machine learning,” the Privacy Center will:"
* Be as specific as possible about what the technology in question is and how it works.
* Identify any obstacles to our own understanding of a technology that result from failures of corporate or government transparency.
* Name the corporations responsible for creating and spreading the technological product.
* Attribute agency to the human actors building and using the technology, never to the technology itself.
@rra And still, cringeworthy "news" like this keeps getting written and published:
> Yet the question Meller wants to raise with this, the first public demonstration of a creative, robotic painting, is not “can robots make art?”, but rather “now that robots can make art, do we humans really want them to?”