here are some thoughts on movement and relocation on computers, blessed by @luxpris' beautiful gif https://networkcultures.org/entreprecariat/on-movement-and-relocation/
knowledge as know-how
Knowledge is always a know-how. This know-how, which might be implicit, is first codified and then automated: a technique becomes a commodity. Example: online search. What search engines commodify is not the information itself but the information retrieval process. The PageRank algorithm codifies the social practice of linking. The list of results is the commodity that derives from this codification process.
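To make "codification" concrete, here's a minimal sketch of the power-iteration idea behind PageRank. The link graph is invented for illustration, and the real algorithm handles many more cases (dangling pages, personalization, scale); this is just the core loop that turns linking into ranking:

```python
# A toy PageRank: each page's rank is repeatedly redistributed along
# its outgoing links, so pages that are linked to accumulate rank.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page keeps a small baseline rank (the "teleport" term)
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            # a link is a vote: split this page's rank among its targets
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Invented link graph: "c" is linked to by both "a" and "b",
# so it ends up with the highest rank.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

The social practice (who links to whom) is the input; the commodity (an ordered list) is the output.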
terminological conundrum, help needed!
What to call that messy assemblage around computers, made of popular devices, semi-standardized interface layouts of apps and websites, widespread functional expectations, daily online habits, and the prevailing sentiment towards technology?
- mainstream computing
- the anti-Stallman
- the Techium
- the generalist software-hardware continuum
- normie computerdom
- platform consensus
- hegemonic computing
What to call the contemporary computer monoculture? Here are some ideas: https://networkcultures.org/entreprecariat/the-user-condition-06-the-ithing/
Something I should have added in the conclusions is that *monoculture is plural*: monoculture is not Mac but Mac vs PC, or more recently Apple vs Huawei. Monoculture is in the intersection of cultures that attempt hegemony. Do we see a clash of cultures in this video? Yes, but we also see a monoculture. The dichotomy *is* the monoculture.
"People used to talk about the internet as a place. The information superhighway. A frontier. The internet was something to get on. Even the desktop metaphor was in turn clarifying, then confusing: it helped people understand how a personal computer organizes information, while it invited a user to think of the experience as three-dimensional and spatial. Now people talk about the internet as something to talk to; it is someone." Joanne McNeil, Lurking, 17
On externalized knowledge/know-how: "When he was CEO, Eric Schmidt called multiple search results a 'bug'. Google 'should be able to give you the right answer just once. We should know what you meant.' When a YouTube video ends and autoplay selects another, that's Google's attempt at a 'right' answer." Joanne McNeil, Lurking, p. 36
updated the list of names for computer monoculture with "impersonal computing", which I like https://networkcultures.org/entreprecariat/the-user-condition-06-the-ithing/
some user-related Benjamin Bratton's quotes from *The Stack*:
"In practice, the User is not a type of creature but a category of agents; it is a position within a system without which it has no role or essential identity." p. 251
"[The User's position] never allows someone to enter into it fully formed; it also forms that person (or thing) into shape as it provides them tactics for shifting systems and their apparatuses." p. 252
" […] the User is both an initiator and an outcome …" p. 253
"We, the actual consumers, are the shadows of the personified simulations of ourselves" p. 255
'But neural networks, and software in general, do not create new reality—they ingest data and reflect back a reality that is a regurgitation and reconfiguration of what they have already consumed. And this reality that these machines reflect back is slightly wrong. Recall the statistician’s aphorism “all models are wrong, but some are useful.” What happens when we rely on these models to produce new realities, and feed those slightly-wrong realities back into the machines again? What happens when we listen to Spotify’s Discover Weekly playlist week after week, “like” the posts that Facebook recommends to us, and scroll through TikTok after TikTok? I am guilty of all of these, and it would not be wrong to claim that my taste in music and sense of humor are mediated by this mutual recursion between the algorithms and the real world.' https://blog.jse.li/posts/software/
after many years infinite scroll might be coming to google search https://www.seroundtable.com/google-pagination-tests-dancing-google-logo-29578.html
here's an article from #npccafe, since it's related to this thread. The idea: Angry Birds stands for action and decision, while Flappy Bird stands for repetitive behaviour and automatism. An excerpt:
"Angry Birds is the constellation of links punctuating a Wikipedia entry or the landing page of The Guardian or The New York Times. It is Open Street Maps or Google Earth. It is Radio.garden. In short, it stands for any territory that allows a destination, right or wrong. It is a multidimensional (though monodirectional) game. The user-player is here a user-navigator.
Flappy Bird is the bottomless feed of Facebook or Twitter, the chain of Instagram stories, the automatic playlist of YouTube and Netflix. It is all that goes by itself and therefore paralyses the user, making their intervention accidental or superfluous. In the inexhaustible feed, scrolling feels like a mechanical, rudimentary activity, ready to be automated, like turning the crank of a phonograph. You don't navigate a feed; at best you unfold it. With stories and playlists, instead, full automation is finally achieved."
Some notes on Douglas Rushkoff re: techlash and net apology:
"Computers were the tools that would upscale humanity. But maybe humanity simply wasn't developed enough to handle the abilities potentiated by digital technologies distributed so widely and rapidly. At least not the sector of humanity that ended up being responsible for developing this stuff." [weird take imo to see humans as in need of an update to work with tech: a belief in competence rather than community]
"No matter our current perceptions of our lowly place in the order of things, we are not still in the land of passive television consumption and limited knowledge, taking actions that somehow recede into the past and fade away. No matter how stupid and powerless we have been led to think of ourselves, we have at our fingertips — in our pockets, even — access to the near-totality of human knowledge and capacity." [some hope]
"But looking back, I’m thinking the answer wouldn’t have been to talk less about the power and potential of the net, but more." [net maximalism]
"Whether you’re scrolling through your social media site of choice, such as Facebook, Twitter, Reddit, or Instagram, or simply engaging with all the bad news on your favorite news source’s website for long periods of time, doomscrolling isn’t platform specific. And its roots extend back past the internet to the rise of the 24-hour cable news cycles, where it first became possible to gorge on depressing news on an endless loop.
After being first mentioned on Twitter in 2018, the term doomscrolling has become an increasingly popular way to describe the obsessive perusal of social media or news that for many has been sparked by the fear and anxiety around the coronavirus. The word’s close cousin, “doom surfing,” dates back to the late 2000s, when it was used in reference to the game Dino Run (the term described the act of running next to the game’s “Wall of Doom”). In many ways, the concept of doomscrolling—which more specifically refers to scrolling on your phone—has become the word of the moment, at least according to Merriam-Webster, which featured both terms on its Words We’re Watching blog at the end of April."
"[…] the player can articulate a personal project within the game world, one that may diverge from the instrumental approach implicitly suggested by the game itself (which invites the player to accumulate resources and optimize behaviors, rewarding them accordingly)."
"As smartphones blurred organizational boundaries of online and offline worlds, spatial metaphors lost favor. How could we talk about the internet as a place when we're checking it on the go, with mobile hardware offering turn-by-turn directions from a car cup holder or stuffed in a jacket pocket?"
McNeil, Lurking, p. 118-9
"In fact, over an entire day I never used my wallet, email, or a browser. When I get back home, my computer, sitting on the kitchen table, now seems to me simply a typewriter, only less noisy. […] Over the course of my whole day, I never left WeChat. In China, the smartphone is WeChat. And WeChat knows everything about each of us."
Red Mirror, Simone Pieranni, p. 5
"After Google simplified the search, each subsequent big breakthrough in net technology was something that decreased the technical know-how required for self- publishing (both globally and to friends). The stressful and confusing process of hosting, ftping, and permissions, has been erased bit by bit, paving the way for what we now call web 2.0." Cory Arcangel, 2009
I'm finally starting to bring together all the things I've been reading about the user condition and I'm gonna post WIP sections here. Here's the first one:
As humans, when we are born we are thrown into a world. This world is shaped by things made by other humans before us. These things are what relate and separate people at the same time. Not only do we contemplate them: we use these things and fabricate more of them. In this world of things, we labor, work and act. The labor we perform is a private process for subsistence that doesn't result in a lasting product. Through work, we fabricate durable things. And then we act: we do things that lead to new beginnings: we give birth, we engage with politics, we quit our jobs. One could arrange these activities along a scale of behavior: labor is pure behavior, work can be seen as a modulation of behavior, action is the *interruption* of behavior. Action is what breaks the "fateful automatism of sheer happening". This is, in a nutshell, Hannah Arendt's depiction of the human condition. (-->)
Nowadays, the internet might feel less like a world, but it maintains the "worldly" feature of producing the more or less intelligible conditions of users. In fact, with and within networked computers, users perform all three kinds of activity identified by Arendt: they perform repetitive labor, they fabricate things, and, potentially, they act, that is, they produce new beginnings by escaping prescribed paths, by creating new ones, by not doing what they were expected to do or what they've always done before.
"To put it another way, the World Wide Computer (The Cloud, ndr), like any other electronic computer, is programmable. Anyone can write instructions to customize how it works, just as any programmer can write software to govern what a PC does. From the user’s perspective, programmability is the most important, the most revolutionary, aspect of utility computing. It’s what makes the World Wide Computer a personal computer—even more personal, in fact, than the PC on your desk or in your lap ever was." Nicholas Carr
more #WIP from The User Condition >>>
"Among the three types of activity identified by Hannah Arendt, action is the broadest and the most vague: is taking a shortcut on the way to the supermarket a break from the "fateful automatism of sheer happening"? Does a freshly released operating system coincide with "a new beginning"? Hard to say. And yet, I find "action", with its negative anti-behavioral connotation, a more useful concept than the one generally used to positively characterize one's degree of autonomy: agency. Agency is meant to measure someone's or something's "capacity, condition, or state of acting or of exerting power". All good, but how do we measure this if not by assessing the very power of changing direction, of producing a fork in the path? A planet that suddenly escaped its predetermined orbit would appear "agential" to us, or even endowed with intent. An action is basically a choice, and agency measures the capacity for making choices. No choice, on the contrary, is behavior. The addict has little agency because their choice to interrupt their toxic behavior exists, but is tremendously difficult. In short, agency is the capacity for action, which is in turn the ability to interrupt behavior. (1/2)
"Here's a platform-related example. We can postulate a shortage of user agency within most dominant social media. What limits the agency of a user, namely, their ability to stop using such platforms, is a combination of addictive techniques and societal pressures. It's hard to block the dopamine-induced automatism of scrolling, and maybe it's even harder to delete your account when all your friends and colleagues assume you have one. In this case, low agency takes the form of a lock-in. If agency means choice, the choice we can call authentic is *not* to be on Facebook (or WeChat, if you will).
While this is a pragmatic understanding of agency, we shouldn't forget that it is also a very reductive one: it doesn't take into account the clash of agencies at play in any system, both human and non-human ones." (2/2)
"But wait, isn't the keyboard the way to escape pre-programmed paths, as it enables the user to write code? Writing code is the deepest interaction possible on a computer!" @despens, 2009
more #WIP from The User Condition >>>
We call "user" the person who operates a computer. But is "use" the most fitting category to describe such an activity? Pretty generic, isn't it? New media theorist Lev Manovich briefly argued that "user" is just a convenient term to indicate someone who can be considered, depending on the specific occasion, a player, a gamer, a musician, etc. This terminological variety derives from the fact, originally stated by computer pioneer Alan Kay, that the computer is a *metamedium*, namely, a medium capable of simulating all other media. What else can we say about the user? In *The Interface Effect*, Alexander Galloway points out *en passant* that one of the main software dichotomies is that of the user versus the programmer, the latter being the one who acts and the former being the one who is acted upon. For Olia Lialina, the user condition is a reminder of the presence of a system programmed by someone else. Benjamin Bratton clarifies: "in practice, the User is not a type of creature but a category of agents; it is a position within a system without which it has no role or essential identity […] the User is both an initiator and an outcome." (1/2)
Paul Dourish and Christine Satchell recognize that the user is a discursive formation aimed at articulating the relationship between humans and machines. However, they consider it too narrow, as interaction does not only include forms of use, but also forms of *non-use*, such as withdrawal, disinterest, boycott, resistance, etc. With our definition of agency in mind (the ability to interrupt behavior and break automatisms), we might come to a surprising conclusion: within a certain system, the non-user is the one who possesses maximum agency, more than the standard user, the power user, and maybe even more than the hacker. To a certain extent, this shouldn't disconcert us too much, as the ability to refuse often coincides with power. The very possibility of breaking a behavior, or of not acquiring it in the first place, betrays a certain privilege. We can think, for instance, of Big Tech CEOs who fill the agendas of their kids with activities to keep them away from social media. (2/2)
more #WIP from The User Condition >
In her essay, Olia Lialina points out that the user preexisted computers as we understand them today. The user existed in the minds of people imagining what computational machines would look like and how they would relate to humans. These people were already consciously dealing with issues of agency, action and behavior. One distinction that can be mapped onto the notions of action and behavior is that between creative and repetitive thought, the latter being prone to mechanization. This distinction can be traced back to Vannevar Bush.
In 1960, J. C. R. Licklider, anticipating one of the cores of Ivan Illich's critique, noticed how often automation meant that people would be there to help the machine rather than be helped by it. A bureaucratic "substitution of ends" would take place. In fact, automation was and is often semi-automation, thus falling short of its goal. This semi-automated scenario merely produces a "mechanically extended man". The opposite model is what Licklider called "Man-Computer Symbiosis", a truly "cooperative interaction between men and electronic computers". The Mechanically Extended Man is a behaviorist model because decisions, which precede actions, are taken by the machine. Man-Computer Symbiosis is a bit more complicated: agency seems to reside in the evolving feedback loop between user and computer. Man-Computer Symbiosis would "enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs". Behavior, understood here as clerical, routinizable work, would be left to computers, while creative activity, which implies various levels of decision making, would be the domain of both.
Alan Kay's pioneering work on interfaces was guided by the idea that the computer should be a medium rather than a vehicle: its function not pre-established (like that of the car or the television) but reformulable by the user (as in the case of paper and clay). For Kay, the computer had to be a general-purpose device. He also elaborated a notion of computer literacy which would include not only the ability to read the contents of a medium (the tools and materials generated by others) but also the ability to write in a medium. Writing in the computer medium would include the production not only of materials, but also of tools. That, for Kay, is authentic computer literacy: "In print writing, the tools you generate are rhetorical; they demonstrate and convince. In computer writing, the tools you generate are processes; they simulate and *decide*."
More recently, Shan Carter and Michael Nielsen introduced the concept of "artificial intelligence augmentation", namely, the use of AI systems to augment intelligence. Instead of limiting the use of AI to "cognitive outsourcing" (_AI as an oracle, able to solve some large class of problems with better-than-human performance_), AI would be a tool for "cognitive transformation" (_changing the operations and representations we use to think_).
Through the decades, user agency has meant freedom from predetermined behavior, the ability to program the machine instead of being programmed by it, decision making, cooperation, a break from repetition, functional autonomy. These values, and the concerns deriving from their limitation, have been present since the inception of the science that propelled the development of computers. One of the most persistent fears of Norbert Wiener, the founding father of cybernetics, was fascism. With this word he didn't refer to the charismatic type of power in place during historical dictatorships. He meant something more subtle and encompassing. For Wiener, fascism meant "the inhuman use of human beings": a predetermined world, a world without choice, a world without agency. Here's how he described it in 1950:
> In the ant community, each worker performs its proper functions. There may be a separate caste of soldiers. Certain highly specialized individuals perform the functions of king and queen. If man were to adopt this community as a pattern, he would live in a fascist state, in which ideally each individual is conditioned from birth for his proper occupation: in which rulers are perpetually rulers, soldiers perpetually soldiers, the peasant is never more than a peasant, and the worker is doomed to be a worker.
#WIP from The User Condition essay
In the 1980s, Apple came up with a cheery, Coca-Cola-like [ad](https://www.youtube.com/watch?v=JLXjfhtgtfw) featuring people of all ages from all around the world using their machines for the most diverse purposes. The commercial ended with a promising slogan: "the most personal computer". A few decades later, Alan Kay, who was among the first to envision computers as personal devices<!-- https://history-computer.com/Library/Kay72.pdf -->, was not impressed with the state of computers in general, and with Apple's in particular.
For Kay, a truly personal computer would encourage full read-write literacy. Through the decades, however, Apple seemed to go in a different direction: cultivating an allure around computers as lifestyle accessories, like a pair of sneakers. In a sense, it fulfilled more than any other company the consumers' urge to individualize themselves. Let's not, though, look down on the accessory value of a device and the sense of belonging it creates. It should be enough to visit any hackerspace to recognise a similar logic, but with a Lenovo (or more recently a Dell) in place of a Mac.
#WIP from The User Condition essay
And yet, Apple's computer-as-accessory actively reduced read-write literacy. Apple placed creativity and "genius" at the surface of preconfigured software. Using Kay's terminology, Apple's creativity was relegated to the production of materials: a song composed in GarageBand, a funny effect applied to a selfie with Photo Booth. What kind of computer literacy is this? Counterintuitively, what is a form of writing within a software *vehicle* is often just a form of reading the computer *medium*. We only write the computer medium when we generate not simply materials, but tools. A term coined by Robert Pfaller in a different context seems to fit here: *interpassivity*. Don't get me wrong: not all medium writing needs to happen on an old-style terminal, without the aid of a graphical interface. Writing the computer medium is also designing a macro in Excel or assembling an animation in Scratch.
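Kay's distinction between producing materials and producing tools can be sketched with a toy example (purely illustrative; the names and numbers are invented): applying a ready-made effect to one photo yields a material, while defining a process that turns any effect into a reusable operation is writing a tool.

```python
# Reading/using the medium: applying a ready-made effect to pixels.
# (Illustrative sepia coefficients, applied per RGB pixel.)
def sepia(pixel):
    r, g, b = pixel
    return (min(255, int(0.393 * r + 0.769 * g + 0.189 * b)),
            min(255, int(0.349 * r + 0.686 * g + 0.168 * b)),
            min(255, int(0.272 * r + 0.534 * g + 0.131 * b)))

# Writing the medium: building a tool -- a process that turns ANY
# per-pixel effect into an image-wide operation, like a user-made macro.
def make_filter(effect):
    def apply(image):
        return [[effect(px) for px in row] for row in image]
    return apply

sepia_filter = make_filter(sepia)          # the tool, reusable at will
image = [[(10, 20, 30), (200, 100, 50)]]   # a tiny 1x2 "photo"
filtered = sepia_filter(image)             # the material it produces
```

The filtered image is a material; `make_filter` is the kind of tool-writing Kay has in mind, whether it happens in code, in an Excel macro, or in Scratch.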
#WIP from The User Condition essay
Then the new millennium came, and mobile devices with it. At this point, the hiatus between reading and writing grew dramatically. In 2007 the iPhone was released. In 2010, the iPad was launched. Its main features didn't just have to do with not writing the computer medium, but with not writing at all: among them, browsing the web, watching videos, listening to music, playing games, reading ebooks. The hard keyboard, the "way to escape pre-programmed paths" according to Dragan Espenschied, disappeared from smartphones. Devices had to be jailbroken. Software was compartmentalized into apps. Screens became small and interfaces lost their complexities to fit them. A "rule of thumb" was established. Paraphrasing Kay again, simple things didn't stay simple, and complex things became less possible.
@entreprecariat this is an incredible synthesis you've written up. what an amazing, powerful, & compelling quick capture of so so much. 🙇
@entreprecariat massive notes of Ursula Franklin's Holistic vs Prescriptive Technology distinction going down.
This particular critique is interesting in that it speaks a lot to the upkeep of Prescriptive / mechanistic technologies, which is an interesting layer, whereas Prescriptive technology is usually discussed in terms of the dehumanization of mankind itself being processed.
Licklider's Man-Computer Symbiosis speaks quite directly otoh to Holistic Technology, towards Engelbartian Augmenting the Intellect & other creativities via symbiotic growth & extension.
@jauntywunderkind420 I love this! I've been trying to expand the scope of these concerns and I'm glad to incorporate the thinking of not just computer pioneers. Extremely useful. Thanks!
@entreprecariat if sharing Ursula Franklin around a little more is all my life ends up being for, it will still have been a life with some very solid checks in the + column. a core dichotomy for our times, a measure of what kind of a humanity we are becoming.
@entreprecariat Think I found the source for this (according to Tineye anyway): https://markvomit.tumblr.com/post/631549496902451200/mark-vomit-2020
@entreprecariat Thanks to you, loved that image too, and I still need to wrap my head around your work which seems very very interesting!