
"After Google simplified the search, each subsequent big breakthrough in net technology was something that decreased the technical know-how required for self- publishing (both globally and to friends). The stressful and confusing process of hosting, ftping, and permissions, has been erased bit by bit, paving the way for what we now call web 2.0." Cory Arcangel, 2009


"Ten years ago every web site had a section of external links because people felt it was their personal responsibility to configure the environment and build the infrastructure." Olia Lialina, 2005

(not sure if it was about a felt responsibility, but the result was indeed that of a reconfiguration)


I'm finally starting to bring together all the things I've been reading about the user condition and I'm gonna post WIP sections here. Here's the first one:

As humans, when we are born we are thrown into a world. This world is shaped by things made by other humans before us. These things are what relate and separate people at the same time. Not only do we contemplate these things, but we use them and fabricate more of them. In this world of things, we labor, work and act. The labor we perform is a private process for subsistence that doesn't result in a lasting product. Through work, we fabricate durable things. And then we act: we do things that lead to new beginnings: we give birth, we engage with politics, we quit our job. One could arrange these activities according to a scale of behavior: labor is pure behavior, work can be seen as a modulation of behavior, action is *interruption* of behavior. Action is what breaks the “fateful automatism of sheer happening”. This is, in a nutshell, Hannah Arendt's depiction of the human condition. (-->)


Let's apply this model to computers. If an alternative world is anywhere to be found, that is inside the computer, since the computer has the ability not only, like other media, to represent things, but to *simulate* those things, and simulate other media as well. Joanne McNeil points out that "metaphors get clunky when we talk about the internet" because the internet, a network of networks of computers, is fundamentally manifold and diverse.<!-- 5 --> But for the sake of the argument, let's employ a metaphor anyway. Individual applications, websites, apps and online platforms are a bit like the things that populate a metropolis: whole neighborhoods, monuments, public squares, shopping malls, factories, offices, private studios, abandoned construction sites, workshops. This analogy emerged clearly at the inception of the web, but it became less evident after the spread of mobile devices. Again McNeil: "As smartphones blurred organizational boundaries of online and offline worlds, spatial metaphors lost favor. How could we talk about the internet as a place when we're checking it on the go, with mobile hardware offering turn-by-turn directions from a car cup holder or stuffed in a jacket pocket?"<!-- 118 -->. (-->)


Nowadays, the internet might feel less like a world, but it maintains the "worldly" feature of producing the more or less intelligible conditions of users. In fact, with and within networked computers, users perform all three kinds of activity identified by Arendt: they perform repetitive labor, they fabricate things, and, potentially, they act, that is, they produce new beginnings by escaping prescribed paths, by creating new ones, by not doing what they were expected to do or what they've always been doing before.


"To put it another way, the World Wide Computer (The Cloud, ndr), like any other electronic computer, is programmable. Anyone can write instructions to customize how it works, just as any programmer can write software to govern what a PC does. From the user’s perspective, programmability is the most important, the most revolutionary, aspect of utility computing. It’s what makes the World Wide Computer a personal computer—even more personal, in fact, than the PC on your desk or in your lap ever was." Nicholas Carr


the user and drug dealers quote is not just bad, it's false


more from The User Condition >>>

"Among the three types of activity identified by Hannah Arendt, action is the broadest, and the most vague: is taking a shortcut on the way to the supermarket a break from the "fateful automatism of sheer happening"? Does the freshly released operating system coincide with "a new beginning"? Hard to say. And yet, I find "action", with its negative anti-behavioral connotation, a more useful concept than the one generally used to characterize positively one's degree of autonomy: agency. Agency is meant to measure someone's or something's "capacity, condition, or state of acting or of exerting power". All good, but how do we measure this if not by assessing the very power of changing direction, of producing a fork in path. A planet that suddenly escapes its predetermined orbit would appear "agential" to us, or even endowed with intent. An action is basically a choice, and agency measures the capacity of making choices. No choice, on the contrary, is behavior. The addict has little agency because their choice to interrupt their toxic behavior exists, but is tremendously difficult. In short, agency is the capacity for action, which is in turn the ability to interrupt behavior. (1/2)


"Here's a platform-related example. We can postulate a shortage of user agency within most dominant social media. What limits the agency of a user, namely, their ability to stop using such platforms, is a combination of addictive techniques and societal pressures. It's hard to block the dopamine-induced automatism of scrolling, and maybe it's even harder to delete your account when all your friends and colleagues assume you have one. In this case, low agency takes the form of a lock-in. If agency means choice, the choice we can call authentic is *not* to be on Facebook (or WeChat, if you will).

While this is a pragmatic understanding of agency, we shouldn't forget that it is also a very reductive one: it doesn't take into account the clash of agencies at play in any system, both human and non-human ones." (2/2)


"But wait, isn't the keyboard the way to escape pre-programmed paths, as it enables the user to write code? Writing code is the deepest interaction possible on a computer!" @despens, 2009


"With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us." Zuboff, 2019


more from The User Condition >>>

We call "user" the person who operates a computer. But is "use" the most fitting category to describe such an activity? Pretty generic, isn't it? New media theorist Lev Manovich briefly argued that "user" is just a convenient term to indicate someone who can be considered, depending on the specific occasion, a player, a gamer, a musician, etc. This terminological variety derives from the fact, originally stated by computer pioneer Alan Kay, that the computer is a *metamedium*, namely, a medium capable of simulate all other media. What else can we say about the user? In *The Interface Effect* Alexander Galloway points out *en passant* that one of the main software dichotomies is that of the user versus the programmer, the latter being the one who acts and the former being the one who's acted upon. For Olia Lialina, the user condition is a reminder of the presence of a system programmed by someone else. Benjamin Bratton clarifies: "in practice, the User is not a type of creature but a category of agents; it is a position within a system without which it has no role or essential identity […] the User is both an initiator and an outcome." (1/2)


Paul Dourish and Christine Satchell recognize that the user is a discursive formation aimed at articulating the relationship between humans and machines. However, they consider it too narrow, as interaction does not only include forms of use, but also forms of *non-use*, such as withdrawal, disinterest, boycott, resistance, etc. With our definition of agency in mind (the ability to interrupt behavior and break automatisms), we might come to a surprising conclusion: within a certain system, the non-user is the one who possesses maximum agency, more than the standard user, the power user, and maybe even more than the hacker. To a certain extent, this shouldn't disconcert us too much, as the ability to refuse often coincides with power. Often, the very possibility of breaking a behavior or not acquiring it in the first place betrays a certain privilege. We can think, for instance, of Big Tech CEOs who fill the agenda of their kids with activities to keep them away from social media. (2/2)


more from The User Condition >

In her essay, Olia Lialina points out that the user preexisted computers as we understand them today. The user existed in the minds of people imagining what computational machines would look like and how they would relate to humans. These people were already consciously dealing with issues of agency, action and behavior. A distinction that can be mapped to notions of action and behavior has to do with creative and repetitive thought, the latter being prone to mechanization. Such a distinction can be traced back to Vannevar Bush.


In 1960, J. C. R. Licklider, anticipating one of the core points of Ivan Illich's critique, noticed how often automation meant that people would be there to help the machine rather than be helped by it. A bureaucratic "substitution of ends" would take place. In fact, automation was and is often semi-automation, thus falling short of its goal. This semi-automated scenario merely produces a "mechanically extended man". The opposite model is what Licklider called "Man-Computer Symbiosis", a truly "cooperative interaction between men and electronic computers". The Mechanically Extended Man is a behaviorist model because decisions, which precede actions, are taken by the machine. Man-Computer Symbiosis is a bit more complicated: agency seems to reside in the evolving feedback loop between user and computer. Man-Computer Symbiosis would "enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs". Behavior, understood here as clerical, routinizable work, would be left to computers, while creative activity, which implies various levels of decision making, would be the domain of both.


Alan Kay's pioneering work on interfaces was guided by the idea that the computer should be a medium rather than a vehicle, its function not pre-established (like that of the car or the television) but reformulable by the user (like in the case of paper and clay). For Kay, the computer had to be a general-purpose device. He also elaborated a notion of computer literacy which would include the ability to read the contents of a medium (the tools and materials generated by others) but also the ability to write in a medium. Writing on the computer medium would not only include the production of materials, but also of tools. That, for Kay, is authentic computer literacy: "In print writing, the tools you generate are rhetorical; they demonstrate and convince. In computer writing, the tools you generate are processes; they simulate and *decide*."


More recently, Shan Carter and Michael Nielsen introduced the concept of "artificial intelligence augmentation", namely, the use of AI systems to augment intelligence. Instead of limiting the use of AI to "cognitive outsourcing" (_AI as an oracle, able to solve some large class of problems with better-than-human performance_), AI would be a tool for "cognitive transformation" (_changing the operations and representations we use to think_).


Through the decades, user agency meant freedom from predetermined behavior, the ability to program the machine instead of being programmed by it, decision making, cooperation, a break from repetition, functional autonomy. These values, and the concerns deriving from their limitation, have been present since the inception of the science that propelled the development of computers. One of the most pressing fears of Norbert Wiener, the founding father of cybernetics, was fascism. With this word he didn't refer to the charismatic type of power in place during historical dictatorships. He meant something more subtle and encompassing. For Wiener, fascism meant "the inhuman use of human beings", a predetermined world, a world without choice, a world without agency. Here's how he describes it in 1950:

> In the ant community, each worker performs its proper functions. There may be a separate caste of soldiers. Certain highly specialized individuals perform the functions of king and queen. If man were to adopt this community as a pattern, he would live in a fascist state, in which ideally each individual is conditioned from birth for his proper occupation: in which rulers are perpetually rulers, soldiers perpetually soldiers, the peasant is never more than a peasant, and the worker is doomed to be a worker.


epiphany of the day: when we make a drawing in Illustrator, we are writing with the Illustrator vehicle, but we are merely *reading* the computer medium


#WIP from The User Condition essay 

In the '80s, Apple came up with a cheery, Coca-Cola-like [ad](youtube.com/watch?v=JLXjfhtgtf) with people of all ages from all around the world using their machine for the most different purposes. The commercial ended with a promising slogan: "the most personal computer". A few decades afterwards, Alan Kay, who was among the first to envision computers as personal devices<!-- history-computer.com/Library/K -->, was not impressed with the state of computers in general, and with those of Apple in particular.

For Kay, a truly personal computer would encourage full read-write literacy. Through the decades, however, Apple seemed to go in a different direction: cultivating an allure around computers as lifestyle accessories, like a pair of sneakers. In a sense, it fulfilled, more than other companies, the consumers' urge to individualize themselves. Let's not, though, look down on the accessory value of a device and the sense of belonging it creates. It should be enough to go to any hackerspace to recognize a similar logic, but with a Lenovo (or more recently a Dell) in place of a Mac.


#WIP from The User Condition essay 

And yet, Apple's computer-as-accessory actively reduced read-write literacy. Apple placed creativity and "genius" at the surface of preconfigured software. Using Kay's terminology, Apple's creativity was relegated to the production of materials: a song composed in GarageBand, a funny effect applied to a selfie with Photo Booth. What kind of computer literacy is this? Counterintuitively, what is a form of writing within a software *vehicle* is often a form of reading the computer *medium*. We only write the computer medium when we do not simply generate materials, but tools. A term coined by Robert Pfaller in a different context seems to fit here: *interpassivity*. Don't get me wrong, not all medium writing needs to happen on an old-style terminal, without the aid of a graphical interface. Writing the computer medium is also designing a macro in Excel or assembling an animation in Scratch.


#WIP from The User Condition essay 

Then, the new millennium came and mobile devices with it. At this point, the hiatus between reading and writing grew dramatically. In 2007 the iPhone was released. In 2010, the iPad was launched. Its main features didn't just have to do with not writing the computer medium, but with not writing at all: among them, browsing the web, watching videos, listening to music, playing games, reading ebooks. The hard keyboard, the "way to escape pre-programmed paths" according to Dragan Espenschied, disappeared from smartphones. Devices had to be jailbroken. Software was compartmentalized into apps. Screens became small and interfaces lost their complexities to fit them. A "rule of thumb" was established. Paraphrasing Kay again, simple things didn't stay simple, complex things became less possible.


#WIP from The User Condition essay 

Google Images confirms that we are still anchored to a pre-mobile idea of computers, a sort of skeuomorphism of the imagination. We think of desktops or, at most, laptops. Instead, we should think of smartphones. In 2013, Michael J. Saylor [started noticing a shift](books.google.nl/books?id=8P4lD): “Currently people ask, ‘Why do I need a tablet computer or an app-phone \[that’s what he calls a smartphone\] to access the Internet if I already own a much more powerful laptop computer?’ Before long the question will be, ‘Why do I need a laptop computer if I have a mobile computer that I use in every aspect of my daily life?’” If we are to believe [CNBC](cnbc.com/2019/01/24/smartphone), he was right. Their recent headline was “Nearly three quarters of the world will use just their smartphones to access the internet by 2025”. Right now, a person in the US is more likely to possess a mobile phone than a desktop computer (81% vs 74%, according to the [Pew Research Center](pewresearch.org/internet/fact-)) and I suspect that globally the discrepancy is higher. A person’s first encounter with a computer will soon be with a tablet or mobile phone rather than with a PC. And it's not just kids: the first computer my aunt used in her life is her smartphone. The PC world is turning into a mobile-first world.


#WIP from The User Condition essay 

There is, finally, another way in which the personal in personal computers has mutated. The personal became personalized. In the past, the personal involved not just the possession of a device, but also one's own know-how, a *savoir faire* that a user developed for themselves. A basic example: organizing one's music collection. That rich, intricate system of directories and filenames each of us individually came up with. Such know-how, big or small, is what allows us to build, or more frequently rebuild, our little shelter within a computer. Our *home*. When the personal becomes personalized, the knowledge of the user's preferences and behavior is first registered by the system, and then made alien to the user themselves. There is one music collection and it's called iTunes, Spotify, YouTube Music. Oh, and it's also a mall where advertising forms the elevator music. Deprived of their savoir faire, users receive an experience which is tailored to them, but they don't know exactly how. Why *exactly* is our social media timeline ordered the way it is? We don't know, but we know it's based on our prior behavior. Why exactly does the autosuggest present us with that very word? We assume it's a combination of factors, but we don't know which ones.


#WIP from The User Condition essay 

Let's call it impersonal computing (or … <!-- footnote -->), shall we? Its features: computer accessorization at the expense of an authentic computer literacy, mobile-first asphyxia, dispossession of an intimate know-how. A know-how that, we must admit, is never fully annihilated: tactical techniques emerge in the cracks: small hacks, bugs that become features, eclectic workflows. The everyday life of Lialina's Turing-Complete user is still rich. That said, we can't ignore the trend. In an age in which people are urged to "learn to code" for economic survival, computers are commonly used less as a medium than as a vehicle. The utopia of a classless computer world turned out to be exactly that, a utopia. There are users and there are coders.


"There are endless possibilities as to what a could be. What kind of room is a website? Or is a website more like a house? A boat? A cloud? A garden? A puddle? Whatever it is, there’s potential for a self-reflexive feedback loop: when you put energy into a website, in turn the website helps form your own identity." - Laurel Schwulst

thecreativeindependent.com/peo


"Google’s ideal society is a population of distant users, not a citizenry" Zuboff 2019


An all-inclusive computer literacy for the many was never a simple achievement. Alan Kay recognized this himself:

> The burden of system design and specification is transferred to the user. This approach will only work if we do a very careful and comprehensive job of providing a general medium of communication which will allow ordinary users to casually and easily describe their desires for a specific tool.

Maybe not many users felt like taking on such a burden. Maybe it was simply too heavy. Or maybe, at a certain moment, the burden started to *look* heavier than it was. Users didn't get to express their desires with the computer medium. Instead, those desires were defined *a priori* within the controlled setting of interaction design: theoretical user journeys anticipated and construed user activity. In the name of user-friendliness, many learning curves were flattened.


Maybe that was users' true desire all along. Or, this is, at least, what computer scientist and entrepreneur Paul Graham thinks. In 2001, he recounted: "[…] near my house there is a car with a bumper sticker that reads 'death before inconvenience.' Most people, most of the time, will take whatever choice requires least work." He continues:

> When you own a desktop computer, you end up learning a lot more than you wanted to know about what's happening inside it. But more than half the households in the US own one. My mother has a computer that she uses for email and for keeping accounts. About a year ago she was alarmed to receive a letter from Apple, offering her a discount on a new version of the operating system. There's something wrong when a sixty-five year old woman who wants to use a computer for email and accounts has to think about installing new operating systems. Ordinary users shouldn't even know the words "operating system," much less "device driver" or "patch."


So who should know these words? "The kind of people who are good at that kind of thing" says Graham. His vision seems antipodal to Kay's. Given a certain ageism permeating the quote, one is tempted to root for Kay without hesitation, and to frame Graham (who's currently 56) as someone who wants to prevent the informatic emancipation of his mother. But is that really the case? The answer depends on the cultural status we attribute to computers and the notion of autonomy we adopt.

We might say, with Graham, that his mother is made less autonomous by some technical requirements she neither needs nor wants to deal with. For her, having a computer functioning like a slightly smarter toaster is good enough. Most of the computer's technical complexity, together with its technical possibilities, is alien to her. It is a waste of time and a source of worry to her, a burden. Moreover, in order to continue using her machine, she might be forced to familiarize herself with a new operating system.


On the other hand, we might say, keeping in mind Kay's vision, that the autonomy of Graham's mother was eroded "upstream", as she has been using the computer as an impersonal vehicle, unaware of its profound possibilities. If you believe that society at large is at a loss by using the computer as a smart toaster, then you're with Kay. If you think that is fair, then you're with Graham. But are these two views really in opposition?

Let's consider an actual smart toaster, one of those Internet-of-Things devices. Your smart toaster knows the bread you want to toast and the time that it takes. But, one day, out of the blue, you can't toast your bread because you haven't updated the firmware. You couldn't care less about the firmware: you're starving. But you learn about it and update the device. Then, the smart toaster doesn't work anymore as it used to: settings and features have changed. What we witness here is a reduction of agency, as you can't interrupt the machine's update behavior. Instead, you have to modify your behavior to adapt to it. Back to Graham's mother: the know-how she laboriously acquired, the desire for a specific tool that she casually developed through time, might be suddenly wiped out by a change she never asked for.


Alan Kay has a motto: "simple things should be simple, complex things should be possible". Above, we focused on complex things becoming less possible. But what about simple things? Often, they don't stay simple either. True, without read-write computer literacy a user is stuck in somewhat predetermined patterns of behavior, but the personal adoption of these patterns often forms a know-how. If that is the case, being able to stick to them can be seen as a form of agency. Interruption of behavior means aborting the update. :workstation:


The revolution of behavioral patterns is often sold in terms of convenience, namely less work. Less work means fewer decisions to make. Those decisions are not magically disappearing, but are simply delegated to an external entity that makes them automatically. In fact, we can define convenience as automated know-how or automated decision-making. We shouldn't consider this delegation of choice as something intrinsically bad, otherwise we would end up condemning the computer for its main feature: programmability. Instead, we should distinguish between two types of convenience: autonomous convenience and heteronomous convenience. In the former, the knowledge necessary to make the decision is accessible and modifiable. In the latter, such knowledge is opaque.

Let's consider two ways of producing a curated feed. The first one involves RSS, a standardized, computer-readable format to gather various content sources. The user manually collects the feeds they want to follow in a list that remains accessible and transformable. The display criterion is generally chronological. Thus, an RSS feed incorporates the user's knowledge of the sources and automatizes the know-how of going through the blogs individually. Indeed, less work. In this case, it is fair to speak of autonomous convenience.
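To make this concrete, here is a minimal sketch of such an RSS timeline (browser TypeScript, assuming `fetch`, `DOMParser` and CORS-permissive feeds; the feed URLs are hypothetical placeholders). The point is that both the list of sources and the ordering criterion stay readable and editable by the user:

```typescript
// Autonomous convenience, sketched: the user's own feed list and a plain
// chronological ordering, both of which remain visible and modifiable.

interface Entry {
  feed: string;
  title: string;
  link: string;
  published: Date;
}

// Hand-curated list of sources (hypothetical URLs).
const myFeeds = [
  "https://example.org/blog/rss.xml",
  "https://example.net/notes/feed.xml",
];

async function fetchEntries(feedUrl: string): Promise<Entry[]> {
  const xml = await (await fetch(feedUrl)).text();
  const doc = new DOMParser().parseFromString(xml, "application/xml");
  return Array.from(doc.querySelectorAll("item")).map((item) => ({
    feed: feedUrl,
    title: item.querySelector("title")?.textContent ?? "(untitled)",
    link: item.querySelector("link")?.textContent ?? "",
    published: new Date(item.querySelector("pubDate")?.textContent ?? 0),
  }));
}

async function buildTimeline(feeds: string[]): Promise<Entry[]> {
  const all = (await Promise.all(feeds.map(fetchEntries))).flat();
  // Newest first: a criterion the user can read, understand, and change.
  return all.sort((a, b) => b.published.getTime() - a.published.getTime());
}

buildTimeline(myFeeds).then((timeline) =>
  timeline.forEach((e) => console.log(e.published.toISOString(), e.title))
);
```

Swapping the sort comparator, or adding and removing URLs from `myFeeds`, is the whole extent of the "algorithm": the know-how is automated, but not expropriated.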


The Twitter feed works differently. The displayed content doesn't only reflect the list of accounts that the user follows: it also includes ads, replies, etc. The display criterion is "algorithmic", that is, based on some factors unknown to the user, and only very partially manipulable by them. This is a case of heteronomous convenience. While the former is agential, since the user can fully influence its workings, the latter is behavioral, because the user can't.

Broadly, algorithmic feeds have mostly wiped out the RSS feed savoir faire, overriding autonomous ways of use. The Overton window of complexity was thus reduced. Today, a new user is thrown into a world where the algorithmic feed is the default, while the old user has to struggle more to maintain their RSS know-how. The expert is burdened with exercising their expertise, while the neophyte is not even aware of the possibility of such expertise.


Blogs stop serving RSS, feed readers aren't maintained, etc. It is no coincidence that Google discontinued its Reader product, with the following message on their page: "We understand you may not agree with this decision, but we hope you'll come to love these alternatives as much as you loved Reader." In fact, Google has been simplifying web activities all along. Cory Arcangel in 2009:

> After Google simplified the search, each subsequent big breakthrough in net technology was something that decreased the technical know-how required for self-publishing (both globally and to friends). The stressful and confusing process of hosting, ftping, and permissions, has been erased bit by bit, paving the way for what we now call web 2.0.

True, alternatives do exist, but they become more and more fringe. Graham seems to be right when he says that most users will go for less work. Generally, heteronomous convenience means less work than autonomous convenience, as the maximum number of decisions is made by the system in place of the user. Furthermore, heteronomous convenience dramatically influences the perception of the work required by autonomous convenience. Nowadays, the process of collecting RSS feed URLs *appears* tragically tedious if compared to Twitter's seamless "suggestions for you". :workstation:


"The answer to the question Who knows? was that the machine knows, along with an elite cadre able to wield the analytic tools to troubleshoot and extract value from information." - Zuboff, 2019


On Twitter, we can experience the dark undertones of heteronomous convenience. User Tony Arcieri [developed](twitter.com/bascule/status/130) a worrisome experiment about the automatic selection of a focal point for image previews, which often show only a part of the image when tweeted. Arcieri uploaded two versions of a long, vertical image. In one, a portrait of Obama was placed at the top, while one of Mitch McConnell was placed at the bottom. In the second image the positioning was reversed. In both cases the focal point chosen for the preview was McConnell's face. Who knows! The system spares the user the time to make such a choice autonomously, but its logic is obscure and immutable. Here, convenience is heteronomous.

Does it have to be this way? Not necessarily. Mastodon is an open source, self-hosted social network that at first glance looks like Twitter, but it's profoundly different. One of the many differences (which I'd love to describe in detail but it would be out of the scope of this text, srry) has to do with focal point selection. Here, the user has the option to choose it autonomously, which means manually. They can also avoid making any decision. In that case, the preview will show the middle of the image by default. :workstation:

(thx @joak for pointing me to this case!)
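For the record, here is roughly what that autonomous choice looks like at the API level. Mastodon's media endpoint accepts a `focus` parameter: two comma-separated coordinates between -1.0 and 1.0, with (0,0) as the center of the image. The sketch below (TypeScript with browser `fetch`; the instance URL and token are placeholders, and the endpoint details may vary between Mastodon versions) is indicative rather than definitive:

```typescript
// Uploading an image to a Mastodon instance while setting the focal point
// manually, i.e. making the cropping decision instead of delegating it.

const INSTANCE = "https://mastodon.example"; // placeholder: your instance
const TOKEN = "YOUR_ACCESS_TOKEN";           // placeholder: your API token

async function uploadWithFocus(image: Blob, x: number, y: number) {
  const form = new FormData();
  form.append("file", image, "picture.png");
  // The user's decision: e.g. "0,1" pulls the preview toward the top edge.
  form.append("focus", `${x},${y}`);

  const response = await fetch(`${INSTANCE}/api/v2/media`, {
    method: "POST",
    headers: { Authorization: `Bearer ${TOKEN}` },
    body: form,
  });
  return response.json(); // the media attachment, to be referenced by a post
}
```

Omit `focus` and you get the default described above: the middle of the image, with no hidden model second-guessing you.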

“The danger that the computer poses is to human autonomy. The more that is known about a person, the easier it is to control him. Insuring the liberty that nourishes democracy requires a structuring of societal use of information and even permitting some concealment of information.” Schwartz, 1989


Heteronomous convenience is an automated know-how, a savoir faire turned into a silent procedure, a set of decisions taken in advance for the user. Often, this type of convenience goes hand in hand with the removal of friction, that is, laborious decisions that consciously interrupt behavior. Let's consider a paginated set of items, like the results of a Google Search or DuckDuckGo query. In this context, users have to consciously click on a button to go to the next page of results. That is a minimal form of action, and thus, of friction. Infinite scroll, the interaction technique employed by, for instance, Google Images or Reddit, removes such friction. The mindful action of going through pages is turned into a homogeneous, seamless behavior.
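The difference is visible even at the level of a few lines of front-end code. In this sketch (browser TypeScript; the `#results`, `#next-page` and `#sentinel` elements and the `/api/results` endpoint are hypothetical), the same request is triggered either by a deliberate click or, with infinite scroll, by an observer that fires as soon as the bottom of the list comes into view:

```typescript
// Pagination vs. infinite scroll: the same "next page" request, triggered
// either by a conscious click or automatically by a scroll position.

let page = 1;

async function loadPage(n: number): Promise<void> {
  const items: { title: string }[] = await (
    await fetch(`/api/results?page=${n}`)
  ).json();
  const list = document.querySelector("#results")!;
  for (const item of items) {
    const li = document.createElement("li");
    li.textContent = item.title;
    list.appendChild(li);
  }
}

// 1. Pagination: a deliberate click, a minimal action that interrupts reading.
document.querySelector("#next-page")!.addEventListener("click", () => {
  loadPage(++page);
});

// 2. Infinite scroll: the same request fired whenever a sentinel element at
//    the bottom of the list scrolls into view. No decision is required.
const sentinel = document.querySelector("#sentinel")!;
new IntersectionObserver((entries) => {
  if (entries.some((e) => e.isIntersecting)) {
    loadPage(++page);
  }
}).observe(sentinel);
```

Nothing about the data changes between the two versions; what disappears is the moment of choice.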


And yet, this type of interaction seems somehow old-fashioned. Manually scrolling an infinite webpage feels imperfect, accidental, temporary if not already antiquated, even weird one could say: it’s a mechanical gesture fitting the list's needs<!-- clarify -->. It’s like turning a crank to listen to a radio. It's an automatism that hasn't yet been automatized. This automatism doesn't produce an event (such as clicking on a link) but modulates a rhythm: it's analog instead of digital. In fact, it has already been automatized: think of YouTube playlists, which play automatically, or Instagram stories (a model that originated in Snapchat and spread to Facebook and Twitter), where the behavior is reversed: the user doesn't power the engine, but instead stops it from time to time. In the playlist mode, "active interaction" is an exception.


We see here a progression analogous to that of the Industrial Revolution: first, some tasks are simply unrelated to one another (hyperlinks and pagination, pre-industrial); then they are organized to require manual, mechanical labor (infinite scroll, industrial); finally, they are fully automated and only require supervision (stories and playlists, smart factory). Pagination, infinite scroll, playlist. Manual, semi-automated, fully automated. Click, scroll, pause.


The late French philosopher Bernard Stiegler focused on the notion of proletarianization: according to him, a proletarian is not just robbed of the form and the products of their labor, but especially of their know-how.<!-- verify --> Users are deprived of the rich, idiosyncratic fullness of their gestures. These gestures are then reconfigured to fit the system's logic before being made completely useless. The gesture is first standardized and then automated. The mindless act of scrolling is analogous to the repetitive operation of assembling parts of a product in a factory. Whereas the worker doesn't leave their position, the user doesn't leave the page. Both feature movement without relocation. Furthermore, in the factory, machines are organized according to an industrial know-how that only the factory itself possesses: it alone fully understands the functional relationships between the parts. What do we call a computational system organized like such a factory? We can call it a platform and define it as a system that extracts and standardizes user decisions before rendering them unintelligible and immutable. In the platform, opaque algorithms embody the logic that arranges data into lists that are then fed to the user. The platform-factory is smart and dynamic, the user-worker is made dumb and static. :workstation:


"The most profound technologies are those that disappear. They
weave themselves into the fabric of everyday life until they are indistinguishable
from it.” Wieser 1991


"The ENIAC itself, strangely, was a very personal computer. Now we think of a personal computer as one you carry around with you. The ENIAC was actually one that you kind of lived inside". Harry Reed quoted in New Dark Age by James Bridle


@entreprecariat It's in an old GDR encyclopedia. I think "Meyers Neues Lexikon", Leipzig, 1964, P. 531. I have the page but I don't know which folio. It's the entry for "Automatisierung". I can scan the whole page if you're interested. It goes into all kinds of (frankly rather boring) detail including a contrast between capitalist and socialist implementations.

@entreprecariat Ha! That's gorgeous.

Sadly, no. It's a kind of sea-foam green with a few boring stripes on the spine.

I cut out the Automatisierung Page for that lovely pictogram and only kept Folio 6 - Muscat – Ribot.

@entreprecariat Mucho interesting. Is there a place where you log your thoughts in a more perennial way than a Mastodon flow?
