
"Think about what this means in the context of say, a Mac, an iPhone, an iPad. They aren’t full-fledged users. They’re just television watchers of different kinds." Alan Kay

some user-related quotes from Benjamin Bratton's *The Stack*:

"In practice, the User is not a type of creature but a category of agents; it is a position within a system without which it has no role or essential identity." p. 251

"[The User's position] never allows someone to enter into it fully formed; it also forms that person (or thing) into shape as it provides them tactics for shifting systems and their apparatuses." p. 252

" […] the User is both an initiator and an outcome …" p. 253

"We, the actual consumers, are the shadows of the personified simulations of ourselves" p. 255

'But neural networks, and software in general, do not create new reality—they ingest data and reflect back a reality that is a regurgitation and reconfiguration of what they have already consumed. And this reality that these machines reflect back is slightly wrong. Recall the statistician’s aphorism “all models are wrong, but some are useful.” What happens when we rely on these models to produce new realities, and feed those slightly-wrong realities back into the machines again? What happens when we listen to Spotify’s Discover Weekly playlist week after week, “like” the posts that Facebook recommends to us, and scroll through TikTok after TikTok? I am guilty of all of these, and it would not be wrong to claim that my taste in music and sense of humor are mediated by this mutual recursion between the algorithms and the real world.' blog.jse.li/posts/software/

more Bratton (p. 257) on The User:

"The more salient design problem seems less to design for *Users*, as if they were stable forms to be known and served, than to *design and redesign the User itself* in the image of whatever program might enroll it."

here's an article related to this thread. The idea: Angry Birds stands for action and decision, while Flappy Bird stands for repetitive behaviour and automatism. An excerpt:

"Angry Birds is the constellation of links punctuating a Wikipedia entry or the landing page of The Guardian or The New York Times. It is Open Street Maps or Google Earth. It is Radio.garden. In short, it stands for any territory that allows a destination, right or wrong. It is a multidimensional (though monodirectional) game. The user-player is here a user-navigator.

Flappy Bird is the bottomless feed of Facebook or Twitter, the chain of Instagram stories, the automatic playlist of Youtube and Netflix. It is all that goes by itself and therefore paralyses the user, making their intervention accidental or superfluous. In the inexhaustible feed, scrolling feels like a mechanical, rudimentary activity, ready to be automated, like turning the crank of a phonograph. You don’t navigate a feed, at best you unfold it. With stories and playlists, instead, full automation is finally achieved."

npc.cafe/angry-birds-vs-flappy

Some notes on Douglas Rushkoff re: techlash and net apology:

"Computers were the tools that would upscale humanity. But maybe humanity simply wasn’t developed enough to handle the abilities potentiated by digital technologies distributed so widely and rapidly. At least not the sector of humanity that ended up being responsible for developing this stuff." [weird take imo to see human in need of update to work with tech: belief in competence rather than community]

"No matter our current perceptions of our lowly place in the order of things, we are not still in the land of passive television consumption and limited knowledge, taking actions that somehow recede into the past and fade away. No matter how stupid and powerless we have been led to think of ourselves, we have at our fingertips — in our pockets, even — access to the near-totality of human knowledge and capacity." some hope]

"But looking back, I’m thinking the answer wouldn’t have been to talk less about the power and potential of the net, but more." [net maximalism]

medium.com/team-human/was-huma

on doomscrolling:

"Whether you’re scrolling through your social media site of choice, such as Facebook, Twitter, Reddit, or Instagram, or simply engaging with all the bad news on your favorite news source’s website for long periods of time, doomscrolling isn’t platform specific. And its roots extend back past the internet to the rise of the 24-hour cable news cycles, where it first became possible to gorge on depressing news on an endless loop.

After being first mentioned on Twitter in 2018, the term doomscrolling has become an increasingly popular way to describe the obsessive perusal of social media or news that for many has been sparked by the fear and anxiety around the coronavirus. The word’s close cousin, “doom surfing,” dates back to the late 2000s, when it was used in reference to the game Dino Run (the term described the act of running next to the game’s “Wall of Doom”). In many ways, the concept of doomscrolling—which more specifically refers to scrolling on your phone—has become the word of the moment, at least according to Merriam-Webster, which featured both terms on its Words We’re Watching blog at the end of April."

fastcompany.com/90514867/dooms

"[…] il giocatore può articolare un progetto personale all’interno del mondo di gioco che può discostarsi dall’approccio strumentale che è implicitamente indicato dal gioco stesso (e che invita ad accumulare risorse e ottimizzare comportamenti, ricompensando il giocatore di conseguenza)."

indiscreto.org/la-filosofia-di

"As smartphones blurred organizational boundaries of online and offline worlds, spatial metaphors lost favor. How could we talk about the internet as a place when we're checking it on the go, with mobile hardware offering turn-by-turn directions from a car cup holder or stuffed in a jacket pocket?"

McNeil, Lurking, p. 118-9

"Those with comfortable legacy media perches as staff writers and editors might have used the internet, but they did not identify as *users*—at least not in the way that someone posting a grievance online would be" McNeil, Lurking 156. From the excellent chapter on social media "outrage".

"In effetti, in una giornata intera non ho mai usato il portafoglio, la mail, un browser. Quando rientro in casa il mio computer, appoggiato sul tavolo in cucina, mi sembra ormai semplicemente una macchina da scrivere, ma meno rumorosa. […] Nel corso di tutta la mia giornata non sono mai uscito da WeChat. In Cina lo smartphone è WeChat. E WeChat sa tutto di ognuno di noi."

Red Mirror, Simone Pieranni, p. 5

possible definition of impersonal computing: computing devoid of an intimate savoir faire

and here's a tentative definition of netstalgia: nostalgia for a time of non-predetermined computer behavior

"After Google simplified the search, each subsequent big breakthrough in net technology was something that decreased the technical know-how required for self- publishing (both globally and to friends). The stressful and confusing process of hosting, ftping, and permissions, has been erased bit by bit, paving the way for what we now call web 2.0." Cory Arcangel, 2009

"Ten years ago every web site had a section of external links because people felt it was their personal responsibility to configure the environment and build the infrastructure." Olia Lialina, 2005

(not sure if was about a felt responsibility, but the result was indeed that of a reconfiguration)

I'm finally starting to bring together all the things I've been reading about the user condition and I'm gonna post WIP sections here. Here's the first one:

As humans, when we are born we are thrown into a world. This world is shaped by things made by other humans before us. These things are what relate and separate people at the same time. Not only do we contemplate them: we use these things and fabricate more of them. In this world of things, we labor, work and act. The labor we perform is a private process for subsistence that doesn't result in a lasting product. Through work, we fabricate durable things. And then we act: we do things that lead to new beginnings, such as giving birth, engaging with politics, quitting our job. One could arrange these activities along a scale of behavior: labor is pure behavior, work can be seen as a modulation of behavior, action is *interruption* of behavior. Action is what breaks the “fateful automatism of sheer happening”. This is, in a nutshell, Hannah Arendt's depiction of the human condition. (-->)

Let's apply this model to computers. If an alternative world is anywhere to be found, it is inside the computer, since the computer has the ability not only, like other media, to represent things, but to *simulate* those things, and to simulate other media as well. Joanne McNeil points out that "metaphors get clunky when we talk about the internet" because the internet, a network of networks of computers, is fundamentally manifold and diverse.<!-- 5 --> But for the sake of the argument, let's employ a metaphor anyway. Individual applications, websites, apps and online platforms are a bit like the things that populate a metropolis: whole neighborhoods, monuments, public squares, shopping malls, factories, offices, private studios, abandoned construction sites, workshops. This analogy emerged clearly at the inception of the web, but it became less evident after the spread of mobile devices. Again McNeil: "As smartphones blurred organizational boundaries of online and offline worlds, spatial metaphors lost favor. How could we talk about the internet as a place when we're checking it on the go, with mobile hardware offering turn-by-turn directions from a car cup holder or stuffed in a jacket pocket?"<!-- 118 --> (-->)

Nowadays, the internet might feel less like a world, but it maintains the "worldly" feature of producing the more or less intelligible conditions of users. In fact, with and within networked computers, users perform all three kinds of activity identified by Arendt: they perform repetitive labor, they fabricate things, and, potentially, they act, that is, they produce new beginnings by escaping prescribed paths, by creating new ones, by not doing what they were expected to do or what they've always done before.

"To put it another way, the World Wide Computer (The Cloud, ndr), like any other electronic computer, is programmable. Anyone can write instructions to customize how it works, just as any programmer can write software to govern what a PC does. From the user’s perspective, programmability is the most important, the most revolutionary, aspect of utility computing. It’s what makes the World Wide Computer a personal computer—even more personal, in fact, than the PC on your desk or in your lap ever was." Nicholas Carr

most of our digital life happens in speedrun mode

the user and drug dealers quote is not just bad, it's false

more from The User Condition >>>

"Among the three types of activity identified by Hannah Arendt, action is the broadest, and the most vague: is taking a shortcut on the way to the supermarket a break from the "fateful automatism of sheer happening"? Does the freshly released operating system coincide with "a new beginning"? Hard to say. And yet, I find "action", with its negative anti-behavioral connotation, a more useful concept than the one generally used to characterize positively one's degree of autonomy: agency. Agency is meant to measure someone's or something's "capacity, condition, or state of acting or of exerting power". All good, but how do we measure this if not by assessing the very power of changing direction, of producing a fork in path. A planet that suddenly escapes its predetermined orbit would appear "agential" to us, or even endowed with intent. An action is basically a choice, and agency measures the capacity of making choices. No choice, on the contrary, is behavior. The addict has little agency because their choice to interrupt their toxic behavior exists, but is tremendously difficult. In short, agency is the capacity for action, which is in turn the ability to interrupt behavior. (1/2)

"Here's a platform-related example. We can postulate a shortage of user agency within most dominant social media. What limits the agency of a user, namely, their ability to stop using such platforms, is a combination of addictive techniques and societal pressures. It's hard to block the dopamine-induced automatism of scrolling, and maybe it's even harder to delete your account when all your friends and colleagues assume you have one. In this case, low agency takes the form of a lock-in. If agency means choice, the choice we can call authentic is *not* to be on Facebook (or WeChat, if you will).

While this is a pragmatic understanding of agency, we shouldn't forget that it is also a very reductive one: it doesn't take into account the clash of agencies at play in any system, both human and non-human ones." (2/2)

"But wait, isn't the keyboard the way to escape pre-programmed paths, as it enables the user to write code? Writing code is the deepest interaction possible on a computer!" @despens, 2009

"With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us." Zuboff, 2019

more from The User Condition >>>

We call "user" the person who operates a computer. But is "use" the most fitting category to describe such an activity? Pretty generic, isn't it? New media theorist Lev Manovich briefly argued that "user" is just a convenient term to indicate someone who can be considered, depending on the specific occasion, a player, a gamer, a musician, etc. This terminological variety derives from the fact, originally stated by computer pioneer Alan Kay, that the computer is a *metamedium*, namely, a medium capable of simulate all other media. What else can we say about the user? In *The Interface Effect* Alexander Galloway points out *en passant* that one of the main software dichotomies is that of the user versus the programmer, the latter being the one who acts and the former being the one who's acted upon. For Olia Lialina, the user condition is a reminder of the presence of a system programmed by someone else. Benjamin Bratton clarifies: "in practice, the User is not a type of creature but a category of agents; it is a position within a system without which it has no role or essential identity […] the User is both an initiator and an outcome." (1/2)

Paul Dourish and Christine Satchell recognize that the user is a discursive formation aimed at articulating the relationship between humans and machines. However, they consider it too narrow, as interaction does not only include forms of use, but also forms of *non-use*, such as withdrawal, disinterest, boycott, resistance, etc. With our definition of agency in mind (the ability to interrupt behavior and break automatisms), we might come to a surprising conclusion: within a certain system, the non-user is the one who possesses maximum agency, more than the standard user, the power user, and maybe even more than the hacker. To a certain extent, this shouldn't disconcert us too much, as the ability to refuse often coincides with power. The very possibility of breaking a behavior, or of not acquiring it in the first place, often betrays a certain privilege. We can think, for instance, of Big Tech CEOs who fill the agenda of their kids with activities to keep them away from social media. (2/2)

more from The User Condition >

In her essay, Olia Lialina points out that the user preexisted computers as we understand them today. The user existed in the minds of people imagining what computational machines would look like and how they would relate to humans. These people were already consciously dealing with issues of agency, action and behavior. One distinction that can be mapped onto the notions of action and behavior has to do with creative versus repetitive thought, the latter being prone to mechanization. Such a distinction can be traced back to Vannevar Bush.

In 1960, J. C. R. Licklider, anticipating one of the core points of Ivan Illich's critique, noticed how often automation meant that people would be there to help the machine rather than be helped by it. A bureaucratic "substitution of ends" would take place. In fact, automation was and is often semi-automation, thus falling short of its goal. This semi-automated scenario merely produces a "mechanically extended man". The opposite model is what Licklider called "Man-Computer Symbiosis", a truly "cooperative interaction between men and electronic computers". The Mechanically Extended Man is a behaviorist model because decisions, which precede actions, are taken by the machine. Man-Computer Symbiosis is a bit more complicated: agency seems to reside in the evolving feedback loop between user and computer. Such symbiosis would "enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs". Behavior, understood here as clerical, routinizable work, would be left to computers, while creative activity, which implies various levels of decision making, would be the domain of both.

Alan Kay's pioneering work on interfaces was guided by the idea that the computer should be a medium rather than a vehicle, its function not pre-established (like that of the car or the television) but reformulable by the user (as in the case of paper and clay). For Kay, the computer had to be a general-purpose device. He also elaborated a notion of computer literacy which would include the ability to read the contents of a medium (the tools and materials generated by others), but also the ability to write in a medium. Writing in the computer medium would include not only the production of materials, but also of tools. That, for Kay, is authentic computer literacy: "In print writing, the tools you generate are rhetorical; they demonstrate and convince. In computer writing, the tools you generate are processes; they simulate and *decide*."

More recently, Shan Carter and Michael Nielsen introduced the concept of "artificial intelligence augmentation", namely, the use of AI systems to augment intelligence. Instead of limiting the use of AI to "cognitive outsourcing" (_AI as an oracle, able to solve some large class of problems with better-than-human performance_), AI would be a tool for "cognitive transformation" (_changing the operations and representations we use to think_).

Through the decades, user agency has meant freedom from predetermined behavior, the ability to program the machine instead of being programmed by it, decision making, cooperation, a break from repetition, functional autonomy. These values, and the concerns deriving from their limitation, were present from the very inception of the science that propelled the development of computers. One of the most persistent fears of Norbert Wiener, the founding father of cybernetics, was fascism. With this word he didn't refer to the charismatic type of power in place during historical dictatorships. He meant something more subtle and encompassing. For Wiener, fascism meant "the inhuman use of human beings": a predetermined world, a world without choice, a world without agency. Here's how he describes it in 1950:

> In the ant community, each worker performs its proper functions. There may be a separate caste of soldiers. Certain highly specialized individuals perform the functions of king and queen. If man were to adopt this community as a pattern, he would live in a fascist state, in which ideally each individual is conditioned from birth for his proper occupation: in which rulers are perpetually rulers, soldiers perpetually soldiers, the peasant is never more than a peasant, and the worker is doomed to be a worker.

epiphany of the day: when we make a drawing in Illustrator, we are writing with the Illustrator vehicle, but we are merely *reading* the computer medium


#WIP from The User Condition essay 

In the 1980s, Apple came up with a cheery, Coca-Cola-like [ad](youtube.com/watch?v=JLXjfhtgtf) showing people of all ages from all around the world using their machine for the most diverse purposes. The commercial ended with a promising slogan: "the most personal computer". A few decades later, Alan Kay, who was among the first to envision computers as personal devices<!-- history-computer.com/Library/K -->, was not impressed with the state of computers in general, and with those of Apple in particular.

For Kay, a truly personal computer would encourage full read-write literacy. Through the decades, however, Apple seemed to go in a different direction: cultivating an allure around computers as lifestyle accessories, like a pair of sneakers. In a sense, it fulfilled more than other companies the consumers' urge to individualize themselves. Let's not, though, look down on the accessory value of a device and the sense of belonging it creates. It should be enough to visit any hackerspace to recognise a similar logic, but with a Lenovo (or, more recently, a Dell) in place of a Mac.


#WIP from The User Condition essay 

And yet, Apple's computer-as-accessory actively reduced read-write literacy. Apple placed creativity and "genius" at the surface of preconfigured software. Using Kay's terminology, Apple's creativity was relegated to the production of materials: a song composed in GarageBand, a funny effect applied to a selfie with Photo Booth. What kind of computer literacy is this? Counterintuitively, what is a form of writing within a software *vehicle* is often a form of reading the computer *medium*. We only write the computer medium when we generate not simply materials, but tools. A term coined by Robert Pfaller in a different context seems to fit here: *interpassivity*. Don't get me wrong: not all medium writing needs to happen on an old-style terminal, without the aid of a graphical interface. Writing the computer medium is also designing a macro in Excel or assembling an animation in Scratch.

#WIP from The User Condition essay 

Then the new millennium came, and mobile devices with it. At this point, the hiatus between reading and writing grew dramatically. In 2007 the iPhone was released. In 2010, the iPad was launched. Its main features didn't just have to do with not writing the computer medium, but with not writing at all: among them, browsing the web, watching videos, listening to music, playing games, reading ebooks. The hard keyboard, the "way to escape pre-programmed paths" according to Dragan Espenschied, disappeared from smartphones. Devices had to be jailbroken. Software was compartmentalized into apps. Screens became small and interfaces lost their complexities to fit them. A "rule of thumb" was established. Paraphrasing Kay again: simple things didn't stay simple, complex things became less possible.

#WIP from The User Condition essay 

Google Images confirms that we are still anchored to a pre-mobile idea of computers, a sort of skeuomorphism of the imagination. We think of desktops or, at most, laptops. Instead, we should think of smartphones. In 2013, Michael J. Saylor [started noticing a shift](books.google.nl/books?id=8P4lD): “currently people ask, ‘Why do I need a tablet computer or an app-phone \[that’s what he calls a smartphone\] to access the Internet if I already own a much more powerful laptop computer?’ Before long the question will be, 'Why do I need a laptop computer if I have a mobile computer that I use in every aspect of my daily life?’” If we are to believe [CNBC](cnbc.com/2019/01/24/smartphone), he was right. A recent headline of theirs read: “Nearly three quarters of the world will use just their smartphones to access the internet by 2025”. Right now, a person in the US is more likely to own a mobile phone than a desktop computer (81% vs 74% according to the [Pew Center](pewresearch.org/internet/fact-)), and I suspect that globally the discrepancy is higher. A person’s first encounter with a computer will soon be with a tablet or mobile phone rather than with a PC. And it's not just kids: the first computer my aunt used in her life is her smartphone. The PC world is turning into a mobile-first world.

#WIP from The User Condition essay 

There is, finally, another way in which the personal in personal computers has mutated. The personal became personalized. In the past, the personal involved not just the possession of a device, but also one's own know-how, a *savoir faire* that users developed for themselves. A basic example: organizing one's music collection, that rich, intricate system of directories and filenames each of us individually came up with. Such know-how, big or small, is what allows us to build, or more frequently rebuild, our little shelter within a computer. Our *home*. When the personal becomes personalized, the knowledge of the user's preferences and behavior is first registered by the system, and then made alien to the user themselves. There is one music collection and it's called iTunes, Spotify, YouTube Music. Oh, and it's also a mall where advertising forms the elevator music. Deprived of their savoir faire, users receive an experience tailored to them, but they don't know exactly how. Why exactly is our social media timeline ordered the way it is? We don't know, but we know it's based on our prior behavior. Why exactly does the autosuggest present us with that very word? We assume it's a combination of factors, but we don't know which ones.

#WIP from The User Condition essay 

Let's call it impersonal computing (or … <!-- footnote -->), shall we? Its features: computer accessorization at the expense of an authentic computer literacy, mobile-first asphyxia, dispossession of an intimate know-how. A know-how that, we must admit, is never fully annihilated: tactical techniques emerge in the cracks: small hacks, bugs that become features, eclectic workflows. The everyday life of Lialina's Turing-Complete User is still rich. That said, we can't ignore the trend. In an age in which people are urged to "learn to code" for economic survival, computers are commonly used less as a medium than as a vehicle. The utopia of a classless computer world turned out to be exactly that, a utopia. There are users and there are coders.

"There are endless possibilities as to what a could be. What kind of room is a website? Or is a website more like a house? A boat? A cloud? A garden? A puddle? Whatever it is, there’s potential for a self-reflexive feedback loop: when you put energy into a website, in turn the website helps form your own identity." - Laurel Schwulst

thecreativeindependent.com/peo

"Google’s ideal society is a population of distant users, not a citizenry" Zuboff 2019

An all-inclusive computer literacy for the many was never a simple achievement. Alan Kay recognized this himself:

> The burden of system design and specification is transferred to the user. This approach will only work if we do a very careful and comprehensive job of providing a general medium of communication which will allow ordinary users to casually and easily describe their desires for a specific tool.

Maybe not many users felt like taking on such a burden. Maybe it was simply too heavy. Or maybe, at a certain moment, the burden started to *look* heavier than it was. Users' desires were no longer expressed by users themselves through the computer medium. Instead, they were defined *a priori* within the controlled setting of interaction design: theoretical user journeys anticipated and construed user activity. In the name of user-friendliness, many learning curves were flattened.

Maybe that was users' true desire all along. Or, this is, at least, what computer scientist and entrepreneur Paul Graham thinks. In 2001, he recounted: "[…] near my house there is a car with a bumper sticker that reads 'death before inconvenience.' Most people, most of the time, will take whatever choice requires least work." He continues:

> When you own a desktop computer, you end up learning a lot more than you wanted to know about what's happening inside it. But more than half the households in the US own one. My mother has a computer that she uses for email and for keeping accounts. About a year ago she was alarmed to receive a letter from Apple, offering her a discount on a new version of the operating system. There's something wrong when a sixty-five year old woman who wants to use a computer for email and accounts has to think about installing new operating systems. Ordinary users shouldn't even know the words "operating system," much less "device driver" or "patch."

So who should know these words? "The kind of people who are good at that kind of thing," says Graham. His vision seems antipodal to Kay's. Given a certain ageism permeating the quote, one is tempted to root for Kay without hesitation, and to frame Graham (who's currently 56) as someone who wants to prevent the informatic emancipation of his mother. But is that really the case? The answer depends on the cultural status we attribute to computers and the notion of autonomy we adopt.

We might say, with Graham, that his mother is made less autonomous by technical requirements she neither needs nor wants to deal with. For her, having a computer functioning like a slightly smarter toaster is good enough. Most of the computer's technical complexity, together with its technical possibilities, is alien to her: a waste of time, a source of worry, a burden. Moreover, in order to continue using her machine, she might be forced to familiarize herself with a new operating system.

On the other hand, we might say, keeping in mind Kay's vision, that the autonomy of Graham's mother was eroded "upstream", as she has been using the computer as an impersonal vehicle, unaware of its profound possibilities. If you believe that society at large loses out by using the computer as a smart toaster, then you're with Kay. If you think that is fair, then you're with Graham. But are these two views really in opposition?

Let's consider an actual smart toaster, one of those Internet-of-Things devices. Your smart toaster knows the bread you want to toast and the time it takes. But one day, out of the blue, you can't toast your bread because you haven't updated the firmware. You couldn't care less about the firmware: you're starving. Still, you learn about it and update the device. Then the smart toaster no longer works as it used to: settings and features have changed. What we witness here is a reduction of agency, as you can't interrupt the machine's update behavior. Instead, you have to modify your behavior to adapt to it. Back to Graham's mother: the know-how she laboriously acquired, the desire for a specific tool that she casually developed through time, might be suddenly wiped out by a change she never asked for.

Alan Kay has a motto: "simple things should be simple, complex things should be possible". Above, we focused on complex things becoming less possible. But what about simple things? Often, they don't stay simple either. True, without read-write computer literacy a user is stuck in somewhat predetermined patterns of behavior, but the personal adoption of these patterns often forms a know-how. If that is the case, being able to stick to them can be seen as a form of agency. Interruption of behavior means aborting the update. :workstation:

The revolution of behavioral patterns is often sold in terms of convenience, namely less work. Less work means fewer decisions to make. Those decisions don't magically disappear; they are simply delegated to an external entity that takes them automatically. In fact, we can define convenience as automated know-how or automated decision-making. We shouldn't consider this delegation of choice as something intrinsically bad, otherwise we would end up condemning the computer for its main feature: programmability. Instead, we should discern between two types of convenience: autonomous convenience and heteronomous convenience. In the former, the knowledge necessary to take the decision is accessible and modifiable. In the latter, such knowledge is opaque.

Let's consider two ways of producing a curated feed. The first one involves RSS, a standardized, computer-readable format to gather various content sources. The user manually collects the feeds they want to follow in a list that remains accessible and transformable. The display criterion is generally chronological. Thus, an RSS feed incorporates the user's knowledge of the sources and automates the know-how of going through the blogs individually. Indeed, less work. In this case, it is fair to speak of autonomous convenience.
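To make the idea of autonomous convenience a bit more tangible, here is a minimal sketch of such a feed aggregator (Python standard library only; the feed URLs are placeholders, and real-world RSS is messier than this): the subscription list is a plain, user-editable structure, and the only ranking applied is chronological.

```python
# A minimal sketch of "autonomous convenience": a hand-maintained feed list,
# merged into one chronological timeline. Standard library only; the feed
# URLs below are placeholders, not real sources.
import urllib.request
import xml.etree.ElementTree as ET
from datetime import timezone
from email.utils import parsedate_to_datetime

# The list stays in the user's hands: readable, editable, re-orderable.
FEEDS = [
    "https://example.org/blog/rss.xml",    # hypothetical
    "https://example.net/notes/feed.xml",  # hypothetical
]

def fetch_items(url):
    """Yield (date, title, link) for each <item> of an RSS 2.0 feed."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        pubdate = item.findtext("pubDate")
        if not pubdate:
            continue  # skip undated items to keep the sort trivial
        when = parsedate_to_datetime(pubdate)
        if when.tzinfo is None:            # normalize naive dates to UTC
            when = when.replace(tzinfo=timezone.utc)
        yield when, item.findtext("title", "(untitled)"), item.findtext("link", "")

def timeline(feeds):
    items = [entry for url in feeds for entry in fetch_items(url)]
    # The display criterion is plainly chronological: no hidden ranking.
    return sorted(items, key=lambda entry: entry[0], reverse=True)

if __name__ == "__main__":
    for when, title, link in timeline(FEEDS):
        print(when.date(), title, link)
```

Everything that determines what appears, and in which order, sits in the user's own list and in a sorting rule they can read and change.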

The Twitter feed works differently. The displayed content doesn't only reflect the list of contacts that the user follows; it also includes ads, replies, etc. The display criterion is "algorithmic", that is, based on factors unknown to the user, and only very partially manipulable by them. This is a case of heteronomous convenience. While the former feed is agential, since the user can fully influence its workings, the latter is behavioral, because the user can't.

Broadly, algorithmic feeds have mostly wiped out the RSS savoir faire, overriding autonomous ways of use. The Overton window of complexity was thus reduced. Today, a new user is thrown into a world where the algorithmic feed is the default, while the old user has to struggle more to maintain their RSS know-how. The expert is burdened with exercising their expertise, while the neophyte is not even aware that such expertise is possible.

Blogs stop serving RSS, feed readers aren't maintained, etc. It is no coincidence that Google discontinued its Reader product, with the following message on its page: "We understand you may not agree with this decision, but we hope you'll come to love these alternatives as much as you loved Reader." In fact, Google has been simplifying web activities all along. Cory Arcangel in 2009:

> After Google simplified the search, each subsequent big breakthrough in net technology was something that decreased the technical know-how required for self-publishing (both globally and to friends). The stressful and confusing process of hosting, ftping, and permissions, has been erased bit by bit, paving the way for what we now call web 2.0.

True, alternatives do exist, but they become more and more fringe. Graham seems to be right when he says that most users will go for less work. Generally, heteronomous convenience means less work than autonomous convenience, as the largest share of decisions is taken by the system in place of the user. Furthermore, heteronomous convenience dramatically influences the perception of the work required by autonomous convenience. Nowadays, the process of collecting RSS feed URLs *appears* tragically tedious compared to Twitter's seamless "suggestions for you". :workstation:

"The answer to the question Who knows? was that the machine knows, along with an elite cadre able to wield the analytic tools to troubleshoot and extract value from information." - Zuboff, 2019

On Twitter, we can experience the dark undertones of heteronomous convenience. User Tony Arcieri [developed](twitter.com/bascule/status/130) a worrisome experiment about the automatic selection of a focal point for image previews, which often show only part of an image when tweeted. Arcieri uploaded two versions of a long, vertical image. In one, a portrait of Obama was placed at the top and one of Mitch McConnell at the bottom. In the second image the positioning was reversed. In both cases the focal point chosen for the preview was McConnell's face. Who knows! The system spares the user the time to make such a choice autonomously, but its logic is obscure and immutable. Here, convenience is heteronomous.

Does it have to be this way? Not necessarily. Mastodon is an open source, self-hosted social network that at first glance looks like Twitter, but it's profoundly different. One of the many differences (which I'd love to describe in detail but it would be out of the scope of this text, srry) has to do with focal point selection. Here, the user has the option to choose it autonomously, which means manually. They can also avoid making any decision. In that case, the preview will show the middle of the image by default. :workstation:

(thx @joak for pointing me to this case!)
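For the curious, here is a rough sketch of how that choice can be made explicitly through Mastodon's media API, assuming the documented `focus` parameter (x,y coordinates between -1.0 and 1.0) and the third-party `requests` library; the instance URL, token and filename are placeholders, not real credentials.

```python
# A rough sketch, not a definitive recipe: setting the preview focal point
# yourself when posting an image to Mastodon.
import requests

INSTANCE = "https://example.social"   # hypothetical Mastodon instance
TOKEN = "YOUR_ACCESS_TOKEN"           # hypothetical access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Upload the image; `focus` is "x,y" between -1.0 and 1.0, with (0,0) the
#    center of the image. Here we point the crop towards the top of a tall image.
with open("tall-image.png", "rb") as image:
    media = requests.post(
        f"{INSTANCE}/api/v1/media",
        headers=HEADERS,
        files={"file": image},
        data={"focus": "0.0,0.9"},
    ).json()

# 2. Attach the uploaded media to a status. Without a focus value, previews
#    simply default to the middle of the image.
requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers=HEADERS,
    data={"status": "choosing my own focal point", "media_ids[]": media["id"]},
)
```

Either way, the decision remains visible and revisable by the user instead of being taken silently on their behalf.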

“The danger that the computer poses is to human autonomy. The more that is known about a person, the easier it is to control him. Insuring the liberty that nourishes democracy requires a structuring of societal use of information and even permitting some concealment of information.” Schwartz, 1989


@entreprecariat the fact that these are called "desire paths" is also just wonderful

@entreprecariat Unnecessary complexity is universally reviled, but defining its negation - "ambient" / unavoidable complexity - is often contentious in the details, and subject to revision. Lots of VCs believe the App Store model is a global maximum (mostly bc it delivered unto them Uber et al), which consigns things like file systems and many fundamental read-write qualities to "unnecessary complexity". Reckoning with competing world views is unavoidable when imagining a tool's ideal form.

@entreprecariat I find most blogs still offer webfeeds, though we could have nicer feedreaders...

@entreprecariat Certainly the biggest I think is that web browsers stopped advertising the presence of RSS on a webpage!

Chrome never did. Firefox said no one was using it & used that excuse to push its centralized Pocket service instead. I don't know about Internet Explorer. Safari still does (yay!), but I think their "Social Links" redesign *used to* only support social networking sites.

@alcinnz Google also used the decline in usage as an argument to shut down Reader

@alcinnz @entreprecariat
Interestingly, the old and disregarded SeaMonkey still has conventional RSS support and other features around user agency, for example an HTML editor.

@alcinnz To be honest however, I used RSS before but I always hated it, and I still do. It was clumsy and consumption-focussed. It was read-only. Getting access to content was somewhat standardized, unlike the ability to, say, add comments or get into some sort of discussion. Rude take: The stubbornness of the tech community helped Google, Twitter, ... to rise here. We always left "user-friendly" solutions that supported non-techie people in using technology to big ...

@entreprecariat

@alcinnz ... corporations, so it's funny to see today we're surprised it's big corporations and proprietary platforms where a vast majority of users is to be found. 😟

@entreprecariat

@z428 @entreprecariat Certainly we need easier-to-use RSS clients, but I don't buy that RSS *itself* needs to be anything more than it is. Certainly I wouldn't want to obligate bloggers, etc to republish comments or the like, we can use separate protocols for the write side.

P2P static hosting is certainly one approach with a big payoff! That'd *help* us compete with many of Silicon Valley's offerings!
