free media alliance

 free software, free culture, free hardware



download pdf: https://freemedia.neocities.org/freemedia.pdf

download odt for editing: https://freemedia.neocities.org/freemedia.odt

free media

from computers to culture

first draft version, july 2018

license: creative commons cc0 1.0 (public domain)

http://creativecommons.org/publicdomain/zero/1.0/

preface

the free software foundation was founded in 1984. around that time, i was a young boy learning to use the new pc we had in our home.

the computer was incredibly easy to use-- just like a typewriter, but much more powerful and elegant (as long as youre ok with a big ugly screen. at least it saves paper.)

unlike most computers of its day, ours had a mouse attached. this was not a gimmick, nor did it help us run programs. when we ran the paint application, we could move the cursor using a joystick, light pen or mouse.

the light pen was heavy, fragile and expensive. the joystick could draw, but clumsily. the mouse was ideal.

all we had to do was type pbrush and hit the enter key. then our graphical program would start. it was absolutely simple.

according to sam williams, richard stallman founded the free software movement because of a printer. the printer driver would have likely benefited from some maintenance, and stallman went to the drivers author to obtain the source code so he could make changes to it. citing a non-disclosure agreement, the author was unable to share it, and so began stallmans efforts to reform the way software is distributed.

at the core of the free software concept is a community of developers. after toying with non-commercial-only sharing, the free software movement defined itself around 4 necessary freedoms, which i like to simplify as:

0. the freedom to use the software

1. the freedom to study the software

2. the freedom to share the software

3. the freedom to modify the software

once you are familiar with the 4 freedoms (in their simplified or original form) its really not necessary for these to translate perfectly to media.

being able to use media in any setting is easy to follow-- being able to share media is something most people want-- being able to remix or modify media is easy to understand-- what about studying the software?

this is about giving the user the source code, and allowing "reverse engineering" as well. its about being able to take the software apart. this translates to media just fine, particularly if you think of other media as software. for example, there are programming languages that let you produce music using code. these are not just text formats; they contain structure, like loops-- features of code.

there are many challenges facing the fsf today, and many challenges facing free software, free culture and free (as in freedom) hardware.

the names are not the real problems here. free software has operating systems-- that was one of its main goals, to give everyone an operating system that would respect their freedom-- and the free software foundation has accomplished many things that no other organisation has.

if stallman retired tomorrow, and for some bizarre reason, made me the president of the fsf, i would make a number of substantial changes.

instead, i started my own organisation. this book is about why, what led to it, what we may need and what we can do.

every charity competes to make things happen for the public good. i am not easy to impress, and i have spent years being dissatisfied. if nothing else, i hope i can present software and cultural freedom to many new people who the fsf and free culture foundation missed. im already doing that, but this organisation is a tool to do more of it.

chapter 1: computing-- the gnu operating system

computer systems have layers, and the layers serve different purposes. most users are trained from the top layer, and dont consider the others. they would rather pay someone to go into other layers for them, which is absolutely fine-- i dont have the tools or experience to fix serious plumbing issues, but i know how to use a phone.

i would be really irritated however, if some plumbers came into my house on a regular basis, tore up the walls whenever they cared to, and changed around all my fixtures whether i wanted them to or not.

i would be even more irritated if the new installations lowered the water pressure in the shower every morning, were ugly to look at, and resulted in me having to call additional plumbers more often.

"listen here, i didnt even want these changes."

"well, the changes come with the house."

"what? i bought a house, nobody said anything about plumbers coming in unannounced to make changes."

"oh yeah, its in the end user license agreement. read pages 14 and 15. but dont worry, the price is included."

"whats included?"

"some of the cost of your house pays for the plumbers and fixtures."

"you mean you sold me fixtures i didnt know about, and hired plumbers i dont want, and youre paying them with money i might have saved on the price of my house?"

"yeah, you didnt think you got those for free, did you?"

"but i dont want them!"

"well, youll have to install your own plumbing then. this will void any warranty youve got on it now."

software, at least, sometimes works a little differently in practice than plumbing does.

now imagine that plumbing is a skill you picked up in school, that you were good enough with plumbing to help out your friends and family, and that a lot of things you inadvertently hired these plumbers to do (by paying more for your house) are things you could have done yourself, done better for cheaper, or even avoided because they were unnecessary and degraded the performance of your other plumbing.

you would probably be pretty outraged. but the software industry has certainly banked on the fact that people will tolerate this sort of arrangement for a long time. as someone said recently: most people dont know how their car works. this is probably true, but that doesnt mean it isnt worth bothering to learn more about your car.

even if you dont work on cars for a living (and dont really want to work on cars), there are other reasons to learn about it. first, your car is expensive to own-- if you put all your trust in mechanics (i happen to know of a really good one, but they cant all be above average) then some of them are going to see you coming, and take you for a ride with your wallet as well as your car.

"oh yeah, id definitely get the transmission tuneup."

"but i just had that 3 months ago!"

"right, but-- do you drive in a small town a lot?"

"well yeah, i dont live in the city."

"ok, so id definitely get the tuneup."

i know good mechanics, good computer techs, good franchise owners and good managers-- the sort of people who try to sell people things they actually want or need, not things they dont. the reason is they like to make people happy, and a good reputation is worth a lot more (to their business) than a quick buck.

but theres no way for most people to stop being sold things they dont really want or need, unless they learn something about what theyre buying.

"oh, i heard these guys were good."

"does the person who recommended them know a lot about cars?"

"no, but they got a transmission tuneup and said they were very quick about it."

"ahh, okay."

if you think i am bragging about my car knowledge, i hardly know anything about cars. but i have friends with very reliable cars and extremely good technicians, who dont charge unreasonable prices and wont cheat a customer. and i wouldnt take a car to someone other than them unless i had to.

another good reason to learn something about cars is if youre going on a long road trip, and you hear a noise that convinces you to get it checked before you go-- preventing you from getting stranded in a hot desert climate in the summer.

maybe you just had the car checked, but you knew what the noise meant and took it back. knowing one noise just saved your vacation.

so i dont trust business models that promise me i wont have to learn anything-- if i go to a doctor, i want them to talk with me about things. if i go to a mechanic, i want to talk to the mechanic about the car-- i assume they know more than i do, but i certainly benefit from learning something.

if i seek a computer technician (and i have, since hardware is the layer im not wild about spending too much time with) then we are going to talk about computers, even if theyre ten times more qualified and experienced with commercial repair because they work on several machines every single day.

and im going to try to earn their respect (its not as hard as youd think, they deal with a lot of people who dont care about what they do or fully appreciate it) and theyre going to need to earn mine, or i will do business with someone else instead.

i would make the point very strongly that no matter who you trust to outsource your problems to, its worth learning a little more about the things you hire professionals for-- even if it makes you one of those irritating customers who think they know everything.

i try not to be that guy, though. after all, im paying someone for having skills and experience that go beyond mine. asking questions is one thing, but second guessing everything they say or do just to be cheap wont make me look smarter or save me money-- if theyre not good, theyll find a way to charge me extra. if theyre good, and get tired of dealing with me, theyll tell me to go to someone who is not as good.

but if youre really learning, then at least youre learning.

so weve talked about plumbers that come "free" with the house, weve talked about hiring professionals and why youd want to know something about cars even if someone else works on them.

my feeling is, if these are skills you learned enough about in school (maybe the science teacher does a lot of do-it-yourself at home, and decided to teach everyone how to fix a leaky faucet while talking about physics) then youre not going to want to pay someone to do a shoddy job that you know is shoddy.

so these layers in your computer, they give you access to basically everything. they let you add things, remove things, change things.

theres a lot of power in those layers, power which you dont need to be afraid of. and i do mean "afraid"-- it might not keep people up at night, but i believe our educational system (even with a great teacher here and there) produces a visceral fear of learning in many people, which turns into a real fear of breaking the computer.

that fear is not 100% unfounded. you can break your computer. you can hit it with a hammer and do real damage, or give it a command to delete all files.

you can detach your kitchen sink faucet with the water still going to it, and actually flood your kitchen. you can put your car in neutral, and get behind and push it right into a lava flow from a live volcano. this book will not recommend you do any of those things, but you may want to delete files.

before you go deleting things however, my recommendation is that people who arent natural tinkerers find a second machine to learn on. this way their learned fear of breaking the computer is focused on a computer theyre not so afraid to break.

but this book wont require you to do a lot of things with hardware. if you get a laptop to learn on, you probably wont prefer to open it and you probably dont need to. if you get a desktop, its easier to open but probably still optional.

for now, we are still talking about a hypothetical second machine, not an actual purchase. you can also use some old laptop or desktop you arent using anymore, if you copy the files you needed from it (if you dont need the files on it, thats perfect.)

the top layer of your computer is the applications and files youve added yourself.

under that is the default graphical shell and applications, under which is the text shell.

the text shell is what i used when i was about 5. it really wasnt scary, we had no reason to fear it. if you typed pbrush it ran the paint program. today, people say: "ok, waze, where is mcdonalds?" and this is the same idea but with voice instead of typing. its no big deal. i will even try that now:

bash-4.4# ok waze, where is mcdonalds?

bash: ok: command not found

bash-4.4#

this is the computers way of saying "im sorry, i didnt get that..." and then it gives me another "prompt" (the bit of text on the left) as its way of saying: "but you can try telling me in a different way."

if waze were a pc application instead of a phone app, its developers could even make it do this:

bash-4.4# waze where is mcdonalds?

ok, looking for: mcdonalds

bash-4.4#

not scary! (maybe a little creepy, but not scary.)

why is the text shell called bash? its short for "bourne-again shell"-- a pun on an earlier shell written by steve bourne. the "sh" is for "shell." normally the prompt does not say "bash" in it; mine does because the person who set up the defaults for my machines configuration set it that way, and i didnt change it.
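the prompt is just text the shell prints, and on bash it comes from a variable called PS1. a minimal sketch (the exact string here is only an example, made to look like my prompt):

```shell
# the shell prints whatever PS1 holds as its prompt; change the
# variable and the next prompt changes too. this value is made up.
PS1='bash-4.4# '
echo "$PS1"
```

most distros set PS1 in /etc/profile or ~/.bashrc-- which is where the "bash" in my prompt came from.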

we were talking about layers:

1. your programs and files

2. the graphical shell and default applications

3. the text shell and default text applications

4. the kernel, drivers and peripheral firmware

5. the bootloader

6. the main computer firmware

7. the physical, actual computer

you dont have to be able to list all these layers, but theyre what your computer basically consists of.
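you can actually ask a running system about a few of these layers from the text shell. a sketch, assuming a typical gnu/linux setup (paths may differ on yours):

```shell
echo "$SHELL"          # layer 3: the text shell you log into
uname -sr              # layer 4: the kernel name and version
ls /boot 2>/dev/null   # layer 5: bootloader files and kernel images
```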

layer 1 you add yourself, layer 7 you put on a desk or other piece of furniture, or perhaps by your feet.

layers 2-6 can in theory all be replaced with free software, and often this is true also in practice.

this is why the gnu operating system was developed.

these are the layers gnu set out to replace: 2. graphics apps, 3. text apps, 4. kernel, 5. bootloader.

the free software foundation first started releasing free software in the 1980s, and people who used unix-based operating systems began using gnu alternatives instead.

it would be years before i heard of the fsf, and i was learning how to use windows 3.0 and 3.1 as the gnu system was coming together.

to get to windows was as easy as typing "win" at the prompt:

c:\>win

and then hitting enter. i grew to like using windows, but it was weird at first. even though i was used to the mouse, the concept of selecting text with the mouse didnt seem as "natural" as using the arrow keys. i eventually got used to doing it that way.

so much of what seems natural is about what you learn first, whether its a design interface or just a command name. note that even then, windows would also let you select text with the arrow keys. its different when your hand is already on the mouse though.

i continued teaching myself about computers and coding-- i really enjoyed coding with quickbasic, and i would continue to enjoy using it for the next two decades or more. meanwhile in finland, a university student named linus torvalds announced his plans to write an operating system. he was aware of the gnu system, and the platform he was developing for was the 386.

intel-compatible machine models were based on the main chip, so they went "386, 486, pentium, pentium ii, pentium iii, pentium 4..." there is a pentium 4 running next to me as i type this. the primary component of torvalds operating system was the kernel, which would come to be known in the early 1990s as "linux."

the mid 90s came and windows 3.1 still felt like a (very) fast, very able graphical shell and application platform for the pc. it was possible to run most text applications in a text window now, or they could run "full screen" like they did when windows wasnt running. i still didnt like apple.

in 1994, a user-friendly guide to the internet consisted almost entirely of links to gopher sites. gopher was a precursor to the web, the internet most people think of today-- in 1994, the web had barely caught on yet. librarians and scientists and some very interesting geeks (such as the people developing gnu and linux) used the internet.

many of us used "bulletin board systems," which were networks formed by users dialing into some computer (or a few computers) over telephone lines-- sometimes with extra charges for the long-distance connections.

internet cafes started letting people come in and use the web, i got out of high school and started a single college class. i spent most of my time in the computer lab, teaching myself how to make websites and reading about freedos.

in 1999, the only real experience i had with gnu was a floppy disk formatted with a mini gnu system that was called "tomsrtbt." to me it seemed like a dos disk with fewer features, or more features i wasnt familiar with.

dos was the system i used to run windows, and freedos was an effort to make a version of dos that you could share with everyone. i thought this was great. tomsrtbt was more of a curiosity for me, but i wanted to try this "linux operating system" id read about. so far, it didnt seem like very much.

a few years prior, microsoft had come out with windows 95 and eventually windows 98. when i first heard about 95, it sounded cool. it looked kind of cool, but i really didnt think it was necessary. it seemed wasteful of computer power, it seemed like it broke a lot of things, i didnt think it was worth switching for the first few years. this is windows im talking about, and a similar sentiment came about with vista.

another thing i didnt like was that windows 95 generally needed a cd to install. looking back, this is pretty reasonable, but at the time it didnt seem to be. dos would install on a few floppies (yes, we really used them) and windows 3.1 would install from only 6 floppies-- often fewer! the way most people got software was to buy a box with a few floppies in it, or maybe a cd. but this was for installing the os.

id already owned a cd writer when i was younger, but cd readers were not in every computer and forcing people to install one just to put windows on there seemed a bit much.

a couple things changed my mind about that. first, it has to be said that there was a distribution of windows 95 that occupied a large handful of floppies, even if it was not very common in my travels. but also, i learned how to copy the install files from the cd to the hard drive and run it from there after booting from a floppy.

this made it far easier to create backups, do installations with or without a cd reader, and with 486 boxes getting cheaper and cheaper it seemed like windows 95 would finally be tolerable.

i got my first mini tower between 1998 and 2001, with windows 95 on it for free. the owner had installed too many things, and instead of removing them had simply purchased a new one. i was floored. i said "you know i can probably fix this in 30 minutes, right?" he said he didnt care, keep it, whatever. i learned a lot about windows 95. before that, id used windows 3 pretty exclusively.

i was already using the internet with windows 3.1 and the dialer that came with my isp. i would eventually get dialup internet working on windows 95 also. the screen on my laptop died and i had fun removing it, using my first colour laptop (a 386) as a very small desktop connected to a monitor. this is a great use of laptops with broken screens.

i learned a little more about hardware; prior to this i was really just a software guy. the girl i was dating said of my work with dos: "this is cool, you should make an operating system and teach people about it or share it with people." it was something like that, she was excited about it.

i explained that it wasnt possible for several reasons-- dos was relatively friendly (yes, you can laugh) but i couldnt just redistribute it, it wasnt licensed for that. there was this new "linux" system, though so far id barely managed to get it to do anything.

i wanted to install the text parts first, then figure out how to get the graphical stuff installed. and i didnt know anybody that could help me figure out how. and the internet really didnt seem to give me very much information about it-- at least not information i understood.

one of the things about being unfamiliar with a system is that you really dont know what to search for.

i left the country, i came back later that year, i heard about "ubuntu" and ordered a cd for it.

you would get a stack of cds back then, half were "install" cds and the other half were "live" cds. a live cd was like a dos boot disk-- you could just put it in the computer and load it without installing anything. cool.

im trying to think what sort of desktop i had at the time. it wasnt a tower, i think it had 64 mb of ram. ubuntu recommended 128 back then.

i didnt know how to install it, but i knew i wanted to try it first. so i tried it, it took a very long time to boot. i mean the longest boot time id ever experienced in my life, it took somewhere between 14 and 35 minutes to get running. today i would run isohybrid on it and dd it to the hard drive.
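the dd half of that can be sketched safely with scratch files standing in for the real drive-- disk.img and copy.img are names i made up here. with a real live image, youd run isohybrid (from the syslinux tools) on the iso first, then point dd at the actual device:

```shell
# make a 1 mb scratch "image" so nothing real gets overwritten
dd if=/dev/zero of=disk.img bs=1M count=1 status=none
# copy it raw, the same way youd copy an iso onto a drive
dd if=disk.img of=copy.img bs=64K status=none
# check the raw copy is byte-for-byte identical
cmp disk.img copy.img && echo "raw copy matches"
```

with a real device, the of= target looks like /dev/sdX-- get the letter wrong and you wipe the wrong disk, so double-check it.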

you better believe it was slow when it ran. this was interesting, but useless. i took my ubuntu cds to a friends place, i had slightly better luck on her computer. it ran.

now to make it work with dialup internet! if i could get that working, maybe i could... i spent the next couple years trying to find ways to get on dialup internet with ubuntu or anything like it. my service provider was not helpful. the community suggested various things that were far more complicated than connecting with windows. none of them worked.

i also didnt have any portable computers with a network port. i went to pick one up for my laptop, i bought two-- neither worked with puppy, the distro i was using more and more. the community suggested ndiswrapper, a very irritating way to set up a windows driver on a non-windows system. it didnt work either.

by 2006, i was playing with several "distros" or "distributions" or "makes and models" of this gnu/linux system. it was very frustrating, because i had to boot into windows to use the internet, i couldnt boot ubuntu on my favourite laptop, i couldnt boot most distros on my favourite laptop, dsl worked from a cd but i was using puppy on the same partition as windows thanks to the grub4dos bootloader.

all i had to do was reboot, select the system to run, and run it. this was still fairly new to me.

i got a new dell. it used a newer kind of hard drive, but at the time puppy mainly supported the older kind, so i traded my new laptop for an older but similar one, just to keep running puppy from the hard drive.

and i couldnt get the network card to work.

finally, after two years of messing with ubuntu, i managed to run both dsl (on my favourite laptop) and xubuntu (on my newer laptop) and get that stupid network card working with both.

more than that, i had moved to a place that had high-speed internet. now i could actually use those network cards. alas, as much as i liked puppy, it didnt support them. it likely would, today. i enjoyed using xubuntu (instead of windows xp) and dsl (instead of windows 9x.)

i continued to explore the gnu/linux ecosystem. in 2007, it was a lot harder to find good hardware support across distros. youd buy a device, and be really unsure if it was likely to be supported or not. the kernel has come a long way.

starting with windows xp, it was sometimes necessary to call microsoft and activate your copy of windows. i found this appalling, even if it wasnt always required. if i had a legally purchased copy of windows, then what i did with it was none of their business. it increasingly felt like microsoft wanted as much control of my computer as i had. i was never going back.

id get a used machine with windows on it, and get a little nostalgic playing around a bit before installing some variety of gnu/linux on there. often this nostalgia was mixed with disgust. i love (love) old computers, but old microsoft operating systems dont do it for me like they used to. they feel a little fake, a little contrived, and very slightly evil. but that wont make a lot of sense without delving much farther into microsofts not-so-distant past.

i really have to say, that while i have no problem with people disliking microsoft (i do too, ive really barely mentioned other companies up to this point) its really not just microsoft thats like this. apple is like this, and sort of always was-- google is like this-- i dont even want android on anything at all-- many of us were excited that it used the linux kernel, but i loathe it.

around 2007 to 2008 i was collecting a variety of smaller distros and writing a friendly guide to all of them, so people might understand the differences and similarities between these easy but faster and lighter live distros. all you had to do was download one, put it on a cd and then boot your computer with the cd in it.

then i tried sugar on a stick. people from m.i.t. had a very interesting platform and concept designed around "constructionist learning." while i have never considered myself a direct proponent of constructionist learning-- in fact ive read very little about it-- it is a group of concepts that have given us logo, one-laptop-per-child, lego robots and modern logo derivatives (i would include minecraft and app inventor as modern logo derivatives.)

i tried the sugar platform out of curiosity (its very interesting, though i feel its a little too bloated) and moved to trisquel with sugar, a libre derivative of ubuntu that included the sugar platform-- and i tried out the very wonderful python gui for sugar, called pippy.

using pippy brought back my childhood a bit. for years, i had tried dialect after dialect after dialect of basic, looking for a "basic programming language for the 21st century." in 2009, i started teaching myself python. the reason was the way pippy presented it-- in 2009 it made python about as friendly as qb was in the 80s and 90s.

originally i was sure id never be able to tolerate learning python, with mandatory whitespace and having to import everything that only required a command when i was a kid:

basic and qb (1980s-90s)

screen 9  ' switch to 640x350 graphics with 16 colours

pset (5, 5), 10  ' light the pixel at 5, 5 in colour 10 (light green)

python (1990s, but caught on more later)

import pygame  # graphics needs a library now

pygame.init()

yourscreen = pygame.display.set_mode((640, 350))  # the same size screen 9 gave us

yourscreen.set_at((5, 5), (85, 255, 85))  # colour 10 becomes an rgb triple

pygame.display.update()  # nothing shows on screen until you ask

raw_input()  # python 2-- keeps the window open until you hit enter

believe it or not, i became very excited by python!

fig (2015, originally)

now pset 5 5 10

now lineinput

the other thing i noticed with trisquel was that my usb wireless network device (which i could use to add wi-fi to any computer that had a usb port, whether desktop or laptop) suddenly worked. id had it sitting around, i tried plugging it in and hey! now i have another wi-fi device that works.

i tried several libre distros, of which trisquel was one, but the gist of libre distros (the part that mattered most to me at least) was:

1. a version of the linux kernel that was completely free software

2. no non-free software in the live dvd or the repositories

debian 6.0 came very close to this-- with clear separation between repositories, no non-free software in the kernel or the live dvd, and no non-free repos enabled by default.

this is a primary goal of the free software movement, and also an achievement of the free software movement.

and they did it, you can use a totally free (as in libre) operating system.

meanwhile, ubuntu was refusing to fix a bug related to hard drive settings that was destroying one of my laptop drives. the workaround wasnt perfect, but it would help lots of people. debian implemented it promptly, and i switched to debian (which ubuntu and trisquel and so many other distros are based on.)

that remained alright for years. debian was a very solid distro and in some ways, still is.

i moved farther north, got married, looked for ways to promote free software, started making my wife a little shell language that was easier to use than bash, tried teaching basic and python to friends, ran a gopher server, got divorced, dated a millionaire who encouraged me to do more teaching, took an introductory computer class for adults (i wanted to observe how it was taught) and fixed up some old pcs at the homeless shelter so that people could do job searches (and use facebook. there was a lot of facebook.)

the shelter machines ran debian-- originally id offered to install it on the computer in the office. it was very slow and who knows what was wrong with it-- i could have spent far more time fixing it, just to have the staff mess it up again-- or i could install debian and fix their problems. i installed debian.

the woman working in the office ran out and said "its so much faster!"

believing firmly in debian, with lots of reasons to think it was the best distro ever (at the time, it likely was) i was collecting machines that people didnt want anymore, installing debian on them and giving them away again.

when people had windows problems, i often didnt even bother troubleshooting-- here, have a computer. the other one was slow or something? fine. its a bit of an oversimplification, yes-- debian wont do literally everything windows does. not every single example.

you can run some windows software using wine in debian-- generally, you wont be satisfied with the results of that. i use wine for one or two things, mostly the windows command line (not for me, for development) though im not a huge fan. i get it, its alright. if it helps you quit using windows, great.

around 2015, a few things about debian started changing somewhat dramatically.

chapter 2: computing-- the takeover

in late 2014 i was having some interesting problems with a debian upgrade-- after some research, i traced the issue to systemd, a so-called "init system" (it is really not an init system) designed to replace the usual sysvinit, and a whole lot of other things.

at first i tried debian/kfreebsd, then i started paying attention to devuan, a debian fork, and devuan seemed very promising. around february of 2015, i installed devuan on my main development machine, converted my debian installation, and started on a journey away from a huge distro family i leaned on for at least a decade.

i tried several derivatives of devuan, including gnuinos and refracta. ive met the lead refracta developer in person more than once. i can honestly say that it is the greatest debian-based distro that ever existed, it is debian perfected as much as it possibly could be perfected at this point, although i have a slight bias as it includes a little of my own software.

seriously though, it was that good even before. fine, dont believe it, but its still true.

my experiences with gnuinos were also largely positive, refracta was simply closer to what i wanted.

lots of free software users are on a journey not unlike mine. freedom lets you go where you want, and with that freedom you may or may not find yourself exploring a lot of new places.

i would say that it is not only freedom that leads to such exploration, but a lack of hardware support-- sometimes a lack of software support, other times it was just a community i considered really bad. sometimes, communities and people get second chances, thats great. humanity writes good things off a little too quickly sometimes, even i do that now and then.

but i also think there is a free software diaspora going on-- people (users and developers) being driven from their homes for reasons that are partly new, and partly not new. having known a number of homeless people, i am not in any way trying to make light of such things by comparing the free software situation to a diaspora. heck, years ago some people got together and made a facebook alternative called diaspora, only now the name fits so much better than ever.

i was originally an open source advocate. i found my way in and out of that on my own.

linux was originally more modest. from the beginning, torvalds thought of it as an operating system. most of that work was already done, except for the kernel. the kernel is a very large thing that the rest of the software cannot run without-- at least not without an alternative kernel-- but stallman and torvalds have proven to have quite different philosophies.

where they agree is that the linux kernel should be under the gpl license. not only does it make it free software, it requires other contributors to make their contributions free as well. this is referred to as "copyleft," which like so many things said by free software advocates, is bound to offend someone if its taken too seriously.

open source does have a certain appeal. they play nicely with corporations if you find reassurance in that sort of thing, and from time to time they even slander and misrepresent all of the people from the movement they co-opted. after all, the people of the free software movement are not "team players," they say-- free software advocates continually refuse to join the team that is playing against their goal!

i chose open source because i thought it was better. a lot of my cynicism comes from the discrepancies between how open source bills itself and how they really do things. i took them at face value, and i think i was suckered. but whether thats my fault or theirs, they do some things i consider a bit dirty.

one thing is certain-- open source does not take free software (the movement or the concept) seriously, and free software does not take open source seriously either. there was a time when the two might appear compatible, but i dont know how anybody can mistake them for allies at this point-- they are different teams with different goals which are often mutually exclusive.

one of the jobs of open source, literally, is to make nice with corporations. if that were to happen accidentally, if corporations were to decide not to be monopolies and not to act like monopolies, that would be totally fine. one of the implied premises of open source seems to be "if youd just approach them nicely, theyll work with you!"

but if time proves anything, they really wont. monopolies want to control their customers. free software advocates dont want to be controlled. the idea of getting those two things to work for each other instead of against each other goes so far beyond optimism, it is surreal.

but if youll swallow it, thats what open source sets out to do. and they have had a great deal of success, in a way... being closer to companies that have large marketing departments and having similar angles in their approach, its easier for open source to get "cool points," as long as "cool" is something that can be manufactured.

given that the goals of open source are far more lax than the goals of free software (free software has distinct goals, which if unmet are not considered a complete success) it is easier for open source to "win" because it is actually playing a different game.

if the rules of the free software game are "promote and use and contribute to a completely free software ecosystem," the rules of open source are "promote and maybe use open source software, conflate with free software whenever it suits you, but always point out that youre the same except better and more reasonable."

its a broader thing for sure, like the difference between vegan and "wednesdaytarian." one is definitely easier, and how important is the other one, really?

what bothers me is not that open source exists-- if that were all, it would be (and sort of is) real progress for microsoft and apple as companies.

i think people should consider however, that free software and open source really start from opposite ends of the spectrum.

free software demands freedom, and tries to woo users first to choose it themselves.

open source demands little or nothing, and tries to woo corporations first to offer freedom to some degree.

free software continually stands (at least until very recently, where it does so a bit less, but still more than anybody else) for the same sorts of things. open source frequently claims this makes free software unreasonable and too idealistic.

free software urges people to consider things in philosophical and ethical terms. open source urges people to consider things in purely "practical" terms, as if thats not a philosophy as well. (philosophy doesnt get cool points.)

the most "open source" concept of practicality is a philosophy of its own. it might be cynical, it might be unrealistic, but it is certainly a philosophy. the thing about philosophy though, is that it can be debated. when the free software movement presents their arguments, they are opening a debate whether they participate further or not. open source leans more towards branding and ad hom, and belittles the philosophical and ethical sides of free software.

if you notice a clash between nerds and "popular" types, there is one. though i would say beyond that obvious, superficial layer its closer to social movement vs. marketing.

as for practical concerns, stallman retorts that not having to work with digital handcuffs on is very practical.

but my biggest problems with open source are the way they constantly rewrite history. torvalds writes a kernel, calls it an operating system, suddenly "linux" is his invention even though it predates him and includes so many things that exist because free software brought them about.

linus torvalds did "invent linux" the kernel, but he most certainly did not invent "linux" the operating system, and yet most open source supporters are happy to let everybody think he did. many arent even aware that this is not what happened, they really think "linux" (the "operating system") started not with an effort to make everyone free in their computing, but they think it was started in finland by a lone university student.

sometimes credit is given, often fanboys (and fangirls) are the problem, though the history of "linux" the "operating system" is one that glosses over the purpose for which so many vital parts of it were written. it is a co-opting of a movement for commercial gain, as well as the opportunity to retell the story.

if you make it about nothing more than credit (like author credits) and percentages, its certainly a lot more work to make this point-- and arguments for cutting free software out of the history of free software are generally constructed along such lines. "we are like free software, only better" is an opportunity to speak for someone else, while cutting them out of the conversation.

note again that i started on the side doing this, and the more i watched it happen, the more i was confused by it and the more unfair i thought it was. i started out feeling like open source had honest intentions, and im sure many people do.

rather than take a moral stance on whether to include "gnu" or not, i will go for "cool points" or perhaps "libre cool points" and say that its a pretty reliable mark of quality and commitment to you-- at least, of very particular qualities and commitments. its useful to people who wish to signify those.

but no matter what you call it, i find the rhetoric by open source against free software unfair and superficial. that includes the ridiculous attacks from linus torvalds about how free software is "about hate." i doubt he even considers how he is conflating the two things, but what hes really doing is trying to pin being "hateful" on a group of people just for caring about a particular issue more than he does.

free software advocates are no more "hateful" than open source fans. they just dont like being kicked around by corporations. if youre watching from outside all this, its actually pretty difficult to get to an honest or fair assessment through all the regular smearing from open source. the tech press leans heavily in their favour-- free software, which exists to stand against monopolies-- wont get any credit for being "reasonable" until they soften their goals to be friendly to monopolies.

until then, torvalds himself implies that theres no difference between hating an entire group of people and hating the bad things a corporation does. when he is doing this to represent open source and criticise free software, i think its a dirty tactic.

i have found that most free software advocates are relatively "boring" when it comes to this. by boring, i mean their arguments are closer to being logical and fair. the nerd humour that goes into some of them (stallmans constant parodying of brand names, like referring to the amazon kindle as the "swindle") is not appreciated by everyone, but then-- open source ultimately needs customers. free software literally can achieve what it wants to with only nerds. being cool is alright, but its not required.

i find myself more in the middle with this. first, i will never claim allegiance with open source again. every time they lend a hand, something else happens along with it. i think thats really the best way to put it.

if i say that not all open source advocates are bad, im not trying to be kind to "open source" itself. i think open source is destructive, unkind and unfair.

i dont think all of its advocates are. unlike free software, open source treats "marketshare" as a real issue. they want to "win," even if they play by different rules than the ones free software holds itself to. they want to be on the television, in the news, they want you to use their software.

i dont blame them for that alone. free software advocates do say that one way you can help is "use free software." its true. you are far more likely to benefit the free software movement if you actually use free software.

if you buy a mac, it has the free software foundations command line (bash) and actually uses quite a lot of free software, including stallmans own emacs editor, compiled with stallmans gcc compiler, and many other free software tools from bsd. even the macos kernel ("darwin") is free software.

however, if you buy a mac, you are supporting apples efforts to control the application ecosystem with a censored app store, and devices that try to control whether you can install 3rd party applications or not. apple could use an entirely free operating system (indeed, they have contributed a great deal of code as free software) but they are still monopolistic. so the mac is a triumph of open source, less of free software.

but whether youre "cool" or a "dork" or youre like me, and it depends on the day (thats a lie, its dork4life) its not really about the image marketers try to saddle you with, its about what you do and what your primary motivation is.

open source claims to be politically correct, but its proponents chide free software advocates as "autistic" "neckbeards." for all its political correctness, i suppose if youre a sikh or have autism youre just going to have to get used to open source making fun of people for being like you.

free software isnt politically correct, and i find its efforts to become politically correct awkward and frequently one-sided. to be fair, most or all efforts to become politically correct are awkward and frequently one-sided, which is why some people dislike political correctness and dont believe in it.

my goal here is not to prove anything about open source at all-- i dont have the resources or the interest in devoting myself to "debunking open source" the way atheists find pleasure in debunking creationists.

i do comment, because open source openly and publicly (and routinely) attacks free software. it is appropriate to respond to that, and maybe its something i should elaborate on even further. but if you read this, then at least someone warned you-- they described the rhetoric and doublespeak that youll encounter. its a start.

as to what to do about open source, id recommend pointing out the falsity of their rhetoric to people who are considering it for the first time-- id recommend citing your negative experiences with open source, but above all id recommend standing against the things they do to undermine software freedom.

most people in the world really wont take interest or feel like you or i do about dishonest exchanges in the arguments between two rival philosophies. so what? this is something that mostly matters to just free software advocates and open source advocates. everybody should know that.

but the things open source does, the organised efforts of open source against the efforts of the free software movement-- those are individually, different things. each one of those efforts can be examined, evaluated for what it does against free software, and then stood against.

it is probably inefficient to stand against open source as a movement and even the dishonesty of open source (im not saying dont call them out, im saying dont bother becoming a full time "open source debunker." at least, i dont think it would really help, and its not what im recommending.)

open source is the force behind redix however, and redix is something worth standing against. how do you stand against redix?

probably you try to remove it from your computer, just like you would try to remove non-free software if you decided to.

so now that weve described mostly the politics going on every day in this wild free software world, lets talk about the results: posix is a standard many computer users are not aware exists, but for 30 years it has helped hold a variety of unix-like operating systems together: including macos and gnu/linux and bsd.

even if no operating system were 100% posix compliant, as a technical reference point it has prevented these systems from drifting too far apart. once at the command line, i barely had to "learn macos" just because of how much it had in common with other systems id used.
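a minimal sketch of what that common ground buys you: a script written against only posix-specified features should behave the same under dash, bash, ksh, or the sh shipped with macos and bsd. (the example text is arbitrary.)

```shell
#!/bin/sh
# only posix-specified features below: no bashisms, no gnu extensions.
# the same script runs unmodified on gnu/linux, bsd and macos.
count_words() {
    # printf and wc -w are both defined by the posix standard
    printf '%s\n' "$1" | wc -w
}
count_words "free software, free culture, free hardware"
```

this is exactly why "i barely had to learn macos" at the command line-- the standard keeps the basics interchangeable.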

posix is a standard that affects both free software and corporate monopolies. redix is an effort to flee from it.

i do not believe that all free software must comply with posix, that is very far from the point. if you want a posix alternative, theres nothing wrong with that. the real problem with redix is that it is monopolistic. while even the most prominent monopolies like microsoft and apple have toyed with posix compatibility (and grown more reliable as operating systems in the process,) redix has quickly worked to establish itself as a way to gut and replace posix systems, not to provide an alternative.

the long-lasting instability redix has caused in the free software ecosystem took hold of debian and most derivatives in late 2014, but this goes far beyond debian and may possibly find its way into the linux kernel.

all of this is fixable, but not if its simply allowed to happen without greater awareness.

so far, the free software foundation has done practically nothing about this. if corporations use redix as a way to set free software back, if they use it to recreate free software in ways that are much more difficult for people to maintain without corporate monopolies, then the free software movement is seriously undermined and has finally met its match. and so far, the free software foundation has scarcely mentioned this problem, though others have pointed it out.

we are in a very interesting, somewhat ugly chapter of free software history: the real free software movement is doing too little; the open source movement has "practical" (as in freedom) reasons to move in two different directions and split into something that cares even less about freedom, and something else that cares about it more; and free software is less likely to split, because of what resulted from it the last time that happened.

whether different groups work together or not, redix has already set free software development back for at least 3 years. this is most obvious to critics of systemd, though as much as it goes beyond just debian, it goes far beyond just systemd.

redix is a serious threat to free software, the fsf needs to do something, and i dont think they are doing enough.

no, i really dont think theyre doing enough. have they done enough in the past, to warrant their existence? oh wow, absolutely. if not for the fsf i wouldnt be typing this in openoffice (libreoffice is better in every single way, except i find it irritating to use sometimes. i switched to libreoffice when oracle bought openoffice, i didnt use openoffice again until it was apache openoffice.) i wouldnt be using a completely-free kernel, i would probably be swearing at a windows machine or using a decade-old mac. (they also include a python interpreter.)

we can talk about this in terms of fault, we can point fingers and compete to determine the parties most responsible (spoiler: open source was trusted a little too much) but id warn against that crusade, because while youre losing a decades-old, unwinnable (and somewhat rigged) debate, the real problem is progressing, which is redix.

as long as redix is an alternative, its not that bad. but redix is not an alternative-- it is a corporate cuckoo egg sitting in the middle of all free softwares nest, and systemd is the beak poking out of it.

we have two choices, which are to start doing something now or do something later. years into this, we have already taken damage. how much metadata about free software development did github have prior to its purchase? how many people have fled github, continuing the diaspora that systemd caused with debian?

debian fans will laugh and say "what diaspora?" github users will laugh as well. but what percentage of developers are being scattered by all of this put together? would you call that a setback for development?

ive watched developers scrambling since 2014 to fix the mess it has made, but they need to figure out what the mess is and why it isnt getting better. i think many people have some part of the puzzle.

redix is real, dont wait another third of a decade to maintain autonomy from monopolies. what is software freedom, really? it is autonomy from digital monopolies, not just some philosophical ideal. its like not having to work with handcuffs on, or not having strangers make the rules for your computing in a board room.

and for all thats already been accomplished, much more can be done to promote and preserve free software than either the fsf or open source are willing to consider at this time.

do you want to spend the next 10 years writing around systemd? do you think systemd is the only thing like it thats disrupting free software development? i dont dispute that its the most prominent, but i doubt its the only thing you need to be concerned about.

do you believe bryan lunduke when he says (in a rehash of an article open source puts out by someone every 5-10 years or so) that whats killing "open source" is that people disagree too much? i think he may have it completely backwards.

if everyone is free and can generally do what they want, the only people disagreeing are those who feel their freedom is threatened or those trying to threaten it-- they have good cause to disagree. while thats an oversimplification, "open source" fights philosophy with sophistry. except for redix, which is relatively new, their tactics rarely change.

chapter 3: the end of the distro

when i started using computers, a hard drive was optional. today, it is again. gnu/linux users have the option of booting from an installed operating system or a "live" bootable version.

some live versions are designed as a way to try out the distro before installing, and others are made for users that actually prefer to boot live. usually, live distros offer one or more ways to install to the hard drive if desired.

we started using hard drives because they were faster, larger in capacity, and more reliable if you left them in. leaving a floppy in the drive is arguably good for the drive heads (at least while moving the computer) though its still bad for the floppy. installing to the hard drive saved you a step during boot-up.

programs were commands-- some built into the shell, some external. but you could use the computer with fewer than ten files on any media.
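the builtin-vs-external distinction is still easy to see today, assuming a posix-style shell: `type` reports which kind a command is, and `command -v` locates external binaries.

```shell
# `cd` has to be built into the shell, because it changes the
# shell's own working directory; `ls` is an ordinary file on disk.
type cd         # reports a shell builtin
type ls         # reports an external file
command -v ls   # prints the path to the external binary, e.g. /bin/ls
```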

the larger sizes of drives allowed software companies to offer larger and larger programs, with more features. and if you want to sell 3 programs to everyone, its easier to bundle those programs and charge for all three at once than it is to sell them separately and charge one time each. even if the program is free– youre more likely to get all 3 programs if theyre together than separate.

soon you needed 5 floppies to install windows, and then you needed a cd to install it. and people figured out that instead of putting the same code in every program, things that were going to be used by lots of programs could go in a separate file called a library.

now to run a program, it needed to know where other files were located just to start. the easiest way to do that is to put the files in a certain place for it to load them, and now weve got installers to put some files here, some here, other files here…
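on a modern gnu/linux system you can watch this dependency loading with `ldd`, which lists the shared libraries a program needs before it can even start (output paths vary by distro):

```shell
# list the shared libraries the shell binary loads at startup.
# nearly every program on the system links against libc.
ldd /bin/sh
```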

it would often do to have a program in a single file; but sometimes the licensing didnt allow it, and this was another reason to use libraries.

many people would put all the files for a program in a folder, zip up the folder into an archive, and distribute the program as an archive file. the user would “install” just by unzipping the file into a folder. this is still done with many programs today.
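the same idea, sketched with a tarball (the usual archive format on gnu/linux) and a hypothetical one-file program-- the names here are made up for illustration:

```shell
# build a folder containing a tiny "program", then archive it.
mkdir -p program
printf '#!/bin/sh\necho hello from an unpacked install\n' > program/run.sh
chmod +x program/run.sh
tar -czf program.tar.gz program

# "installing" is just unpacking the archive somewhere convenient:
mkdir -p "$HOME/apps"
tar -xzf program.tar.gz -C "$HOME/apps"
"$HOME/apps/program/run.sh"
```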

a computer needs to be bootable, the media to install that system either needs to be bootable or the installer needs to run on a compatible, already-bootable system– if youre doing recovery, you might as well have the tools on bootable media– and at least if you install, there probably needs to be a way to add programs.

dos, modern windows, mac os, gnu/linux and bsd all have some form of operating system kernel and bootloader; since for gnu/linux these things have to be compiled from source to work well with the kernel, the easiest way to distribute gnu/linux to everyday users is to compile all of it and share the compiled version. you can also just put the source online to let users worry about it, but far fewer people would use it.

the most popular and flexible bootloader is grub, which can be installed from practically any distro. other good bootloaders exist, though grub is the most familiar choice. it will point to where the kernel and "initrd" are, so what a distro really needs is:

1. a kernel

2. an init system (optional) and initrd or initramfs (the files needed for booting up)

3. a collection of files, usually compressed in a squashfs to download faster and fit better/write faster to boot media

although you can theoretically get away with omitting some of the above options, no one is suggesting that a serious distribution is likely to do so.
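to make items 1 and 2 concrete, here is roughly what a grub menu entry tying the kernel and initrd together looks like-- the device and file names are hypothetical, and real ones vary by distro:

```
menuentry "my distro" {
    linux  /boot/vmlinuz root=/dev/sda1 ro
    initrd /boot/initrd.img
}
```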

add to the list:

a pre-compiled collection of packages available for download, although some major offerings (like browsers) offer pre-compiled software on their own website, compiled by the author or maintainer.

source-based distros that focus on the user being able to configure and compile their own packages can produce a faster, leaner system, but will probably never be a mainstream option for users.

more likely than a future where everyone compiles their own operating system (nothing against gentoo or anything like it; compiling your own system is your right and a wonderful educational experience) is a future where everyone exclusively uses tools written in javascript or python. these are interpreted (rather than compiled) languages, so you can run them directly from their source.
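the shell itself works the same way as python or javascript here-- the source file is the program, with no compile step in between:

```shell
# write a one-line "program" and hand it straight to an interpreter.
# python3 or node would be used the same way for .py or .js files.
printf 'echo "no compiler needed"\n' > hello.sh
sh hello.sh
# prints: no compiler needed
```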

i dont think either scenario is very likely, which means that part of the operating system development ecosystem will always be pre-compiled binaries– hopefully with source available and under free licenses.

how we get those pre-compiled binaries will be the major factor in how long the contemporary “distro” is a mainstream solution. i believe “enterprise” distributions will be the last to go-- there may always be enterprise distributions, for the same reasons they exist now.

with the amount of quality control and politics that goes into a distro like debian, a debian-like distro (traditionally a very solid option, in my opinion) may fail to meet the needs of so many organisations. so marketshare and suitability will probably open up somewhat to alternatives that give an organisation a little more control– always with good defaults of course-- for those who dont want or need to bother.

though all are configurable, mainstream and enterprise distros can always cater to people that want decisions pre-made, while source-based distros cater best to people that want to make a lot of decisions.

everyone in between-- those who want control and choices over their system, yet dont necessarily want to make 100 different decisions at install time– is the group least catered to.

getting away from the distro concept and towards more open (and widespread) collaboration would mean:

1. less inherent management from a collection of amateur bureaucrats and (sometimes) bullies.

2. more reliance on “what to include” from users (of all distros) and some increased need for “what to trust” in terms of sources.

3. looser integration (but broader compatibility) of very basic and broadly used tools like bootloaders and installers.

any details on how those things are likely to come about may well differ from the actual ways they will do so:

1. distros decide what packages go into repos, but not what packages people will run.

2. distros decide what packages go into the boot/install/live .isos, but not what packages people will keep.

3. distros that start with a smaller profile and allow you to “build up” are very useful, but not necessarily as popular– although debian and many other distros of various sizes often offer a “netboot” option that serves this purpose– reinforcing the notion that such things are useful.

(since writing the above paragraph a few months ago, i have started lending a hand to a similar, very small-profile distro.)

for any very popular distro to succeed, a large collection of software available at install time seems to be a widely desired option.

when a fundamental decision about what goes into such a collection changes, an organisation or user will decide whether to work around the change if its trivial, or switch to a distro that does more of what they need “out of the box.”

some distros offer different “flavours” to reduce this need, such as ubuntu with xubuntu and kubuntu (and lubuntu) with gnome, xfce, kde (and lxde) respectively; although other considerations might include:

1. what browser to include (browsers need to update often anyway, so this can be a losing battle)

2. what kernel to include (also whether software is very stable and reliable, or cutting edge)

3. what init system to offer

certainly at the outset, most people dont want to make these decisions. a distro is a traditional solution to this. what goes wrong a little too often though, is not this nearly-strawman problematic scenario:

“gosh, im in charge of i.t. but i want to recommend debian and it just doesnt give me enough setup options.”

while this can actually happen:

“im in charge of i.t. but political decisions at debian are screwing up everything on our end, and we are wasting valuable hours working around the things theyve decided to do– our options are increasingly moving towards changing distros. but where to go now?”

this is the real-life scenario where a distro fails “too much,” and the people in this scenario are likely to move to a different distro, not create their own version of it. but as the number of users continues to increase, the distro concept will possibly (no numbers on this, it just seems like a trend) force people to do more and more migrating to new distros and new packages, as the ones they are used to become unsuitable.

we know that this actually happens, what we dont know is how much it happens (or whether its increasing, but it seems to be increasing a lot over the past few years.)

more often, this happens to the user-- and who really cares about a single user? we dont really have to think about them.

what no one knows for sure is how many of these “individual users” would be better off with more options, without having to create their own distro from the ground up.

in some ways, those individuals have more options than ever– both in a good way and bad. the good part is that there are more tools than ever for them to find a solution that fixes these problems.

the bad news is that it is increasingly necessary for many of us to find such options. perhaps these are growing pains that result from more and more people using gnu/linux, and organisations like debian struggling to manage them.

when management fails, it reasserts itself in aggressive (or being management, passive aggressive) ways towards everyone else.

this is my experience with debian over months (and ultimately more than one release cycle) that caused me to swear off it forever. it was not just the choices they made– but the extremely poor management, process and attitudes associated with those (i believe very poor) choices.

while many people are still happy with debian, and other suitable distros exist– i chose debian because i believed it was so reliable that i could continue using debian for years and years.

i do not believe these problems are limited to just debian, either. i have found similar problems and behaviour over the years with ubuntu, puppy and smaller, less popular distros. but when youve long considered debian the best distro, and the best distro approved by the fsf is also made from debian, then a change is a big deal.

i also think these problems have existed for a long time. theyve gotten worse, and will continue to get worse-- and be more easily and frequently noticed-- before they get better.

and im interested in both “big picture” changes and smaller (but vital) details, and both taking the problems apart to find out how they arise– and what future things may help to solve these problems.

anything that dismantles the distro concept and transforms it with automation is a step towards what i call "the end of the distro"– because when its really automated, we dont need the throngs of people that are screwing things up right now.

they will tell you 100 times. “all you have to do is become a developer!” join our team! …like developers never leave something like debian in utter disgust– maybe we can put that farce to bed once and for all-- you have to be more than a “mere” developer if you want to be guaranteed dignity, fairness and control over your computers.

and thats going to become more of a fixable problem. you dont have to give up on the distro, but some people likely will, in order to maintain the distro concept as a solution that really works.

so if not a distro, then what?

if we are talking less about “distros” and more about tools, for example to put custom images together, then many of these tools exist, and how we use them (and move forward with their development) decides how much ending the distro is possible or even desirable.

people would still make things that are basically similar, but they could at least potentially end the cultural significance of the distro– the branding (which isnt all bad, not at all) the politics and the organisational mess of the modern gnu/linux distribution.

these tools can offer a lot more in the way of automation, making it easier to customise everything, and anything– making it easier for inexperienced users to make fairly significant changes (again, such tools already exist, theyre likely to improve) and automation that makes it easier to ultimately pool larger amounts of change into easier-to-use um, what should they be called, plugins?

i am presenting this as a philosophy, even though it will come down to tools. regarding software freedom, i very much believe this is good for the user and gives the user far more power, which some people will say is “too much” although the user already has the power to install hundreds of distros and wipe their hard drive.

and they already have the ability to make a small number of changes to any distribution.

while i think the point of the philosophy is potentially very good for software freedom, what im most interested in is making the large (you could say bloated) organisations that form around creating distributions increasingly obsolete. obsolete doesnt mean it stops being possible– you can still type up your programs and search duckduckgo from an asr 33 teletype, if it pleases you to.

for now, it takes a large group of people to create something like debian; and i believe it will continue to for some time. “the end of the distro” is about alternatives, both practical and hypothetical, and some that are very possible.

is it possible to automate distro building to the point where fewer than 10 people can create something on par with debian? if debian continues to make decisions like it has for the past few years, i believe so.

for years now a handful of people have worked to create a shift in these tools and their application to allow smaller and smaller groups of people to accomplish more and more work with regards to putting a software distribution together.

and if done right, we could continue on this theme until a day when most users (rather than a few) do not even require a software distribution in order to have an .iso customised to their needs. instead of building a distro, they can use plugins. i dont mean packages, but addons that change the distro.
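to make the idea concrete, here is a toy sketch-- every name and package in it is hypothetical, invented purely for illustration-- of what a "plugin" in this sense could be: a small function that transforms a base package manifest, instead of a whole forked distro.

```python
# a toy sketch of "distro plugins": each plugin is a small function
# that transforms a base package manifest, rather than a whole fork.
# all package names here are made up, for illustration only.

base_manifest = {"kernel", "coreutils", "bash", "xorg", "gnome-shell"}

def no_gnome(manifest):
    """plugin: swap the gnome desktop for lighter tools."""
    return (manifest - {"gnome-shell"}) | {"icewm", "leafpad"}

def add_media(manifest):
    """plugin: add a few media applications."""
    return manifest | {"mpv", "audacity"}

def build(manifest, plugins):
    """apply a stack of plugins, in order, to a base manifest."""
    for plugin in plugins:
        manifest = plugin(manifest)
    return manifest

custom = build(base_manifest, [no_gnome, add_media])
print(sorted(custom))
```

the point of the sketch is that a "custom distro" becomes nothing more than a base plus an ordered list of small, shareable transformations-- which anyone can swap, remove or fork without forking the whole project.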

some fans of smaller, lighter live distros have more or less boasted about things like this for years. theyre exaggerating, but not so much that we cant bring their boasts even closer to reality– and sidestep a lot of destructive politics in the process.

politics are unavoidable, ultimately– theyre very important to what goes into applications, but when the politics threaten an application there are often people ready to fork it and move forward. this is so much more work when the effort is a distro. its easier (probably in most instances) to fork an application than a distro, especially a simple application vs. a “simple” distro.

and its not really that distros have to be so much more sophisticated than applications– but we can certainly create tools that give users more power, without relying on the amount of organisation that large distros presently require.

automation will be used increasingly in distro building either way. the real shift will come if we use that automation to serve the user, as much as the organisation.

debian will likely move in this direction as well. debian doesnt have the sort of infrastructure that microsoft or apple has, and users dont have the sort of infrastructure that debian has. still, i think users can compete with debian at least as well as debian competes with microsoft.

the best automation is for the purpose of autonomy, and theres more of it all the time. ubuntu was using it already to make processes more efficient than debians– and devuan is trying to make the effort required by real people small enough to make debian (potentially) obsolete.

i dont believe they will meet the expectations of users, if only because their politics (in practice) arent much improved over debians, if they are improved at all. debian was very good for 20 years. now that its nearly 25, i no longer believe in it.

in short, the first step is to go from just building distros, to building distro factories. and then a step further, from building distro factories to creating what i will call “distro 3d printers”-- tools that are designed for creating a distro, but with no more required “fiddling” than a 3d printer needs.

those (software) tools dont exist yet, but lots of people want one. we can get there if we move in the right direction-- away from massive organisations like debian and towards smaller, more easily managed organisations that allow greater autonomy and require fewer corporate sponsors (im not against corporate sponsorship altogether– send money! go right ahead!)

sponsorship is fine, but there used to be fewer strings attached– the more sponsors you require to get a job done, the more they feel like they can ask, and the less autonomy you ultimately have.

aiming for the scale of a “one-man distro” is beneficial, even if it ultimately requires a team of say, 10 people to really get a quality distro moving-- preferably 10 non-experts, at that.

which types of distros would benefit from full automation, or reduction to a build and remastering application?

assuming that a move to automated build/remaster applications/clusters did replace distros for the most part, the first distros to “evolve” would be small live distros, like slitaz and puppy. they remix the most easily, typically having the smallest infrastructure to begin with.

the next class of distro most likely to benefit is fsf-approved type distros, where it probably makes lots of sense to have a tool that goes through, vets components as free/approved and removes or modifies things accordingly.

every step that isnt automated will ultimately risk being done more than once manually. this would allow more “100% free” distros to gain traction, though personally i am no longer interested in the linux-libre kernel. i do prefer a blob-free kernel, like debian has. automation can give users options.

100% free distros are often playing “catch up” to their mostly-free counterparts, and wasting time is costly; automation could help a lot. (since writing this, i have looked into how much automation trisquel and gnewsense are using, and they are in fact automating a lot already.)
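a freedom-vetting pass like the one described above can be sketched very simply-- the license names and package list below are invented for illustration, and a real tool would of course read actual package metadata rather than a hand-made table:

```python
# a minimal sketch of automated freedom-vetting: check each package's
# declared license against a whitelist of approved licenses, and
# report which packages must be removed or replaced. the package
# list and license strings here are made up for illustration.

FREE_LICENSES = {"gpl-2.0", "gpl-3.0", "mit", "bsd-3-clause", "apache-2.0"}

packages = {
    "coreutils": "gpl-3.0",
    "some-wifi-blob": "proprietary",
    "leafpad": "gpl-2.0",
    "vendor-tool": "custom-eula",
}

def vet(packages, whitelist):
    """split packages into approved and flagged, by declared license."""
    approved = {p for p, lic in packages.items() if lic in whitelist}
    flagged = {p for p, lic in packages.items() if lic not in whitelist}
    return approved, flagged

approved, flagged = vet(packages, FREE_LICENSES)
print("remove or replace:", sorted(flagged))
```

automating even this one step means a "100% free" remix never has to re-do the audit by hand every time its upstream releases a new version.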

its worth pointing out that when a distro becomes something even a noob can download and put together without a community the size of debians, the distro may lose some of its identity or “special sauce.” so slitaz would perhaps become more relevant and more flexible, but fewer people might know “slitaz” because a lot of the people using it are now calling it something else.

this could create a shift away from distro branding (“whats centos?”) and towards the tools used to remix and produce them (“top hat? yeah, i used it to make a bunch of .isos last year.”)

the next distros to lose some of their power would be the large ones, though perhaps debian would regain some of its reputation as “the” distro made for making distros from.

or devuan would get that– but in the long run, such tools may mean that another devuan was never needed, because what devuan does now becomes something anyone could do. i honestly dont think denis roio would have a problem with that. i think his ambitions for autonomy are sincere.

and the last bastion of the “linux distro” would be the “enterprise” offerings, with their agreements with microsoft about ptab, i mean alice, i mean: lots of software patents which we will refer to generally without naming.

since it has the “windows subsystem for linux,” this “sort of” includes windows itself, but not really. obviously, no ones going to remix that (oh alright, bartpe?) but as long as enough proprietary software is involved, remixing is going to be somewhat limited. when i said windows was a distro, im pretty sure that was tongue-in-cheek.

distros emulate monopolies; microsoft was a real monopoly; today they have to actually compete with apple and google. but whether these are really monopolies or not, they all do things that are monopolistic– they would be happy to operate as monopolies if they could.

whether you want to compare distros to these companies or you want to compare them to record labels, the comparison is not as superficial as it will sound at first.

ive made a number of sound predictions about tech over the years. things that just made sense to me (a lot of it based on storage chips) actually exist now. im surprised the most that fpgas exist. i loved the idea but i thought they were way too idealistic for anybody to actually develop and manufacture them.

fpgas are like the boot floppies of cpus (of course they do a lot more than cpu emulation) and live dvds and live usbs are like the boot floppies of today. but its sort of a pain (for most people) to add and remove programs from these images.

if you use a smaller live distro, youll want to tell me about your favorite one. ive remixed puppy enough that i would be happy to take it over, but that will certainly never happen.

people used to love albums, and i know some people still do. for the youtube generation: an album is a collection of songs in a fixed set– sort of like a playlist but more top-down than that…

for records and cassettes there were two “sides” and you had to flip them to listen to the other side. then the compact disc came out and voila– no more flipping. all the songs were on one side.

then napster came out (ok, a little before napster) and everyone started listening to mp3s. people already made “mix tapes” for fun, which were like the boot floppies of music but the music typically came in the form of an album. and the funny thing is, most of the people who non-commercially “pirate” music:

according to the bi norwegian school of management, “those who download ‘free’ music are actually also 10 times more likely to pay for music downloads than those who don’t.” https://www.techradar.com/news/internet/online-music-pirates-buy-the-most-music-593366

but downloading changed something fundamentally-- the album:

“…be honest. when was the last time you really listened to an album all the way through, from start to finish, without interruption?” https://www.npr.org/sections/allsongs/2013/05/20/185534315/do-you-really-listen-to-full-albums

lots of politics go into the track listing– and also which songs make the cut. and that sort of makes sense, each one is a “product” and you can only fit so many, so yeah, you expect a very large company with a contract to have a say in that.

sometimes the artists like it, sometimes they dont.

with a distro: sometimes the users like it, sometimes they dont.

but the record label model is also producing all the ridiculous garbage you hear on the radio now, while so many businesses (and other people) have switched to things like pandora– where listeners (or at least the people playing the music in their business) have more say in what plays.

of course radio stations rarely play full albums either. their model is singles. but more than a decade ago when ubuntu was just breezy, thom yorke said:

“i like the people at our record company, but the time is at hand when you have to ask why anyone needs one.”

i honestly wonder the same thing about distros.

in the short run, there is no way around them– even though it is trivial for a developer to host a software package. really, if you have a website or github– there, done. hosted.

one handy thing that distros do is take the big pile of software they want to put on the “a-side” and make it “ready to play” or you know, bootable. but even for a live cd, thats not as magical as you think.

as for the installer, the options are getting good enough that a distro doesnt need its own “in-house” installer. https://fullcirclemagazine.org/2018/03/01/calamares-3-2-linux-installer-will-integrate-a-module-for-the-kde-plasma-desktop/

so far ive heard mixed things about calamares, though im a refracta fan and its installer works great (in my opinion). calamares certainly has some “nice touches” that will appeal to many; i think its bloat, but its great that we have these options.

the wallpaper and branding are lovely, if you want “added value” for your distro– sort of like the cover art everyone was going to miss out on when they got mp3s instead of albums.

dont hate me too much; i actually agree that cover art is a real loss. but my point is that it wasnt enough to keep albums from becoming a secondary way of obtaining or listening to music.

whats left that we really need distros for, after you subtract the branding and the installer and the live edition? (which some installers can produce automatically, or certainly will be able to.)

* promotion

* keeping binaries compatible / maintaining repos

* tech support

tech support does seem like a very valuable selling point– particularly when it makes red hat so much money. however, red hat isnt as good an example as it seems. if you want someone to explain this, go bother steve litt. ask him why red hats tech support… just go talk to steve. he will love explaining it.

the thing that makes me very sceptical about tech support, is that so often its just people doing one of these:

* telling someone to read the manual (less often now)

* helping someone find something in the manual

* googling it for them / linking to an off-site url

* figuring it out themselves (without using any knowledge special to the distro itself)

* (terribly common) finding it on the nice wiki for a different distro!

so we may not need distros, but we definitely need wikis.

“but who will host or maintain”– no, really, thats not why we still have distros.

promotion– hmm, out of sheer laziness im going to pretend this is a vital thing that makes the distro concept/community completely indispensable despite all other things being doable without one.

which brings me to:

* keeping binaries compatible / maintaining repos

now i glossed over the little “extras” that a distro adds in terms of software. thats probably the second most valuable thing a distro offers, but im only going to count it as half.

some little utilities get their start on a particular distro, but so many either get abandoned, replaced, or make it further than inclusion in a single distro. so yeah, lets count this one part way.

ahem:

* keeping binaries compatible / maintaining repos

this is the big one. with all the other stuff that a distro can do, this is the one we still need distros for.

thats the giant pillar holding up the distro/album as a concept. and as long as theyre doing that, they might as well produce a “track listing” of officially supported, officially included packages, which distros do (normally packaged with an installer and/or live version.)

and that will certainly keep distros relevant for some time; i think the original “distro” was actually more of a repo/collection than the thing we know today. what would you say it was, sls?

i think the modern concept of an all-inclusive modern distro is actually a giant step forward from all that, but i also think we have taken too many steps backwards over the years.

its the open source broadcasting networks time to scare everyone about “open source being torn apart by politics” again-- happens every few years and is always good for a laugh, but honestly– tearing open source apart is great!

mozilla was “TORN APART!” from netscape, libreoffice was TORN APART! from openoffice, webkit was TORN APART! from khtml, and each of these are better than what came before. whats the big deal there?

robert shingledecker has torn apart from dsl, several people have torn apart from debian– i honestly wonder what the real advantage (other than the purely theoretical) of keeping everything in one giant royal family is supposed to be?

bsd used to claim that they didnt do distros, but there are certainly more of those bsd non-distros than there used to be.

has anyone else decided like i have that all of this “tearing apart” is actually progress? the alternative is that every distro in existence stays at home and lives with its parents. is that what bryan lunduke really wants? hey, its what bill gates would want. and to be fair, hes like really successful.

“gnu/linux politics are tearing our fake windows version apart!”

ok, ok, thats alright though. that was always ok. we dont need fake windows, that was always a gimmick.

folks, keep making your fake windows if its what you really want to do.

and again, gentoo, source-based distros… i dont use your stuff, im also not talking so much about you. i actually like the pre-built binaries part of binary distros, but one of these days theres going to be a bit of a change in how all that works.

why a change? because:

1. people lining up to beg for their favorite song to get included on an album isnt going to outlast it becoming really trivial for everyone to make their own album. yes, people will still do that but with smaller communities that dont need giant overhead and that can split off any time, with ease.

2. modifying an existing iso isnt hard, its just tedious. when that goes from merely easy to actually simple (no, we arent there yet) things will change.

no, i dont mean it takes you more than a minute or two-- i mean that what you do in a minute or two really builds up over time if you continue doing it with a conventional remaster tool. a better, more universal, more automated, more powerful remaster tool could be a game changer. slax offered some futuristic examples, though thats not necessarily exactly…

3. distros are getting pushier, users are getting whingier, and if we owe them anything (thats probably the wrong phrase– if we want to help them…) we really need to get back to teaching people how to fish.

im not saying “everyone use gentoo” but i am saying that distros are getting putrid and users are getting so lazy that it hurts them. and by the time more people notice, it will be ten times worse.

the time to head this off is now, but that means figuring out solutions whenever we can.

im actually not pessimistic; i think its sort of inevitable that this stage of distro development is going to lead to a different one. ten years from now, isnt it always sort of different?

well, maybe not every ten years. i think in the last ten years, weve seen a lot of interesting new problems, including the ones im talking about– and a lot of interesting new solutions– which we havent all applied to the problems just yet.

but they will be solutions that create options and freedom, more than they create (or require) agreement from too many people.

they wont necessarily create “marketshare.” to paraphrase bill gates, measuring software freedom in just marketshare is like judging an airplanes design just by how much it weighs.

why is it anyway, that when it comes to vehicles everyone wants to drive a car– but when it comes to computing, everyone wants to ride the bus?

also: you should always be able to uninstall software you dont want. one of my favourite quotes from all gnu/linux history is from the founder of canonical saying:

“dont trust us? erm, we have root. you do trust us with your data already. you trust us not to screw up on your machine with every update.”

he seemed to be implying that trust was a blank cheque, making trust an either/or proposition– sort of like if you give me keys to your house, i go in and shave your dog bald, and when you complain i reply,

“dont trust me? i have keys! you trust me not to mess with stuff in your house every time you go out.”

well the problem is that putting that level of trust in someone doesnt mean its a lifetime supply of trust– after youve earned it, you have to take care with it.

and ubuntu was not taking care when they created unity lens. the comment about having “root” sidestepped this in a clever way, but its an extremely dubious response.

now if you did give me keys to your house, and i decided to go inside and cover your living room floor in jelly, then surely you would consider changing the locks to your doors. if the locksmith was on their overtime schedule, or you had to wait for your next paycheck to cover the costs, perhaps you would be looking for an interim solution. at the very least, you would want to remove the jelly before it made things worse.

in fairness to ubuntu, the jelly in this story was very easy to remove. and kudos to ubuntu for that, even if they shouldnt have put it there in the first place.

but sometimes, the jelly is not so easy to remove. and this is a fairly large problem that people often put off considering the costs of.

i really only want software that is invited. if you are like microsoft, and decide you can install anything you want whenever it pleases you to, i am going to remove windows altogether. this was a huge part of why i was eager to get rid of windows– to change the locks to my house as it were.

windows made a mess, stayed up late and was noisy and inconsiderate, wasted my time with updates on its schedule and not mine, and made it more difficult to do things even when i had left the computer in a state that i was happy enough with for my own purposes.

windows treated my computer like it belonged to microsoft, and not to me. and thats not forgivable in an operating system. honest mistakes are one thing, but windows does this on purpose. as long as windows is easy to remove, the solution is obvious.

but with uefi and secure boot, windows is a little bit harder to remove. oh thank you so much, intel!

nonetheless, windows is not on my computer. and these days, ubuntu isnt either. but a few years ago, systemd quietly snuck into my computer while posing as a debian update. this was partly my fault, i should have paid more attention.

but the bastard hasnt left since then.

this isnt entirely about systemd, because the lesson here is more universal– you shouldnt couple everything to everything else in a vast web of binary bs.

and you can remove systemd, though many have tried, and most find that even if they succeed, systemd keeps getting hooked from more and more things– whether at the package level (apt dependencies, which have gotten more entrenched over a few years) or the library or framework level– firefox wants pulseaudio, pulseaudio wants systemd, etc.

when you talk about these details, some get refuted and some people say youre being sloppy and getting things wrong. however, the list of things that have tried or succeeded in pulling in systemd or part of systemd is enormous. it also varies per distro, meaning that some distros are contributing to the problem themselves.
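the reason removal is so hard can be shown with a toy sketch: walk a dependency graph and collect everything that would break, directly or transitively, if one package were pulled out. the graph below is a hand-made simplification of the chain described above (firefox wants pulseaudio, pulseaudio wants systemd)-- real distro graphs are vastly larger and vary per distro.

```python
# a toy sketch of why removal gets hard: find everything that
# transitively depends on one package. the edges below are a
# deliberate simplification of the chain described in the text.

depends_on = {
    "firefox": {"pulseaudio", "gtk"},
    "pulseaudio": {"systemd"},
    "gnome-shell": {"systemd", "gtk"},
    "leafpad": {"gtk"},
}

def reverse_dependents(target, graph):
    """every package that would break, directly or transitively,
    if `target` were removed."""
    broken = set()
    changed = True
    while changed:
        changed = False
        for pkg, deps in graph.items():
            # a package breaks if it needs the target, or needs
            # anything already known to be broken
            if pkg not in broken and (target in deps or deps & broken):
                broken.add(pkg)
                changed = True
    return broken

print(sorted(reverse_dependents("systemd", depends_on)))
```

even in this tiny graph, removing one package knocks out three others-- and notice that firefox breaks without ever depending on systemd directly. thats the "vast web" in miniature.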

i do not have systemd on my computer, but i have text files/config files that are supposed to be for software that i know i dont have installed, but when i remove them, some things stop working properly.

thats kind of weird.

i should really note right now, that part of the reason i got away from windows many years ago (and chose debian originally) was that they were pretty good about stuff like this– in other words, i went for many years with my living room jelly-free.

now that ubuntu and debian have proven that i cant trust them in my house, i am forced to admit that i have to use something. i mean i could just remove the entire living room, but thats exactly what i am trying to avoid.

surely, this can be fixed. but all technical issues aside, for me this is political.

of all the free software projects ive ever used, only the developers and fans of TWO–

TWO PIECES OF SOFTWARE, EVER–

have ever told me that it would basically be impossible to remove their stuff from my machine without breaking it.

is it ubuntu? no, i can remove their spyware very easily (but the problem is they put it in ubuntu in the first place) and i can remove unity very easily– the desktop their spyware was made for– and i can remove ubuntu itself. they never thumbed their nose at me and said “ha! good luck removing our spyware. youre stuck with it.”

is it torvalds? no– he knows that i could always replace the linux kernel with bsd, or even hurd (i dont think anybody is worried about me making the switch to hurd though.)

what about firefox? with firefox i admit, its very difficult to find good alternatives– no matter how desperate you are to find them– though people do remove it and install other things all the time, so now theres at least one thing we cant blame on firefox.

but only gnome and systemd fans/devs have ever made this arrogant claim to me: youre stuck with our software, go suck an egg.

well, if thats your attitude.

needless to say, this has soured me forever on gnome. what am i willing to tolerate? gtk and evince. evince is replaceable (go ahead and tempt me further, fanboys) but though i do hate myself for saying it– evince is a pretty good pdf viewer.

[note: since writing this, i have switched away from evince.]

if youre looking for an alternative, the best ones i know are zathura (very geeky but quality) and xpdf (very standard, but oldschool.)

as for gtk, the latest version is abysmal and i would be happy to never use it– screw wayland, honestly. you can use it. i love leafpad, i love icewm, so if you consider gtk and “gnome” to be the same thing, then those arrogant bastards were right.

incidentally, a qt port of leafpad would be awesome!

i find it utterly fascinating that stuff breaks when i remove the .service files on my "non-systemd" machine, though perhaps getting away from udev (or dbus?) will fix that.

meanwhile, atril (related to evince) refuses to work when i remove /var/lib/dbus/machine-id, and thats completely stupid. naturally, i will not use it. some people will say that /var/lib/dbus/machine-id is harmless, just like .service files are harmless, though i dont care– theres no justification for requiring it to read pdf files.

we are getting farther and farther from my original metaphor, though its more and more like people are deciding that theyre just not going to sell living room tiles and carpets anymore unless they have jelly included. so my choices are “jelly living room” or “no living room.” thats a huge step backwards for free software-- thats the new redix standard.

this situation is going to get dumber, before it gets better. but you should always be able to uninstall. those who do not agree, consider themselves your master.

reasonable dependencies aside, most free software is trivial to remove. and really, thats how it should be. people who sneer at this dont deserve your respect– or their users.

chapter 4: education-- the takeover

today we are measuring teachers and students more and more with standardised test scores, which is absolutely terrible in practically every way-- it produces students that are encouraged not to think, it encourages teachers not to teach, and it is an unfair and relatively baseless way to rate students, or teachers, or the schools they are from.

and yet we are doing it anyway.

meanwhile, silicon valley is encouraging schools around the world to adopt (and subsidise) a training program just for them.

this has produced a small war between silicon valley and teachers, where on one side you have corporations insisting that corporate training is better than school itself (and schools should be remade in their image) and on the other side you have schoolteachers (many though not all of whom are not as adept with technology as their students) who simultaneously live in a very suddenly 21st century society-- with facebook and smartphones and video chat and mass surveillance-- but also think that schools dont need technology as much as they need to focus on the humanities.

maybe i need someone to better explain "the humanities" to me. to me, how we live in the 21st century is very much part of how we live. you cant do an exceptional job of teaching students how we live while avoiding the 21st century, because if you look around, computers are not about "job training" anymore-- they are a ubiquitous part of modern life, and whether youre teaching philosophy, sociology, history or even literature and law-- computers are part of all of that now. intending to neglect computers as a vital subject is a travesty.

they were less relevant for sure, in the 1970s. if you want to watch computers take over the world, you can probably find old episodes of "the computer programme" from bbc on youtube. on that show youll get to watch the 8-bit revolution as it was in progress, where they explain that computers are the future of banking. youll also witness a very shocking amount of transactions happening on and with paper.

corporations already own most of the message about computing outside of schools, and from this we can glean that they are perhaps not the best stewards of modern culture in our schools, either. but neither side of this fundamental disagreement about 21st century education can be allowed to win; we need a third option with the advantages of both the others.

i am pro-business, a bit left-libertarian (i am at the very least, a libertarian sympathiser) and probably fairly anti-corporate. if you asked me this: "should public schools be anti-corporate in their curriculum?" at first i would want to say "yes, absolutely!" the media is owned by corporations mostly, and is obviously very pro-corporation. schools should work against this, they should teach students to question the world around them.

but if i think about it a bit more, then for philosophical and not just practical reasons, i dont think schools should be entirely "anti"-corporate. theres probably an important lesson in this.

as i set out to write this section, i thought about what argument i would like to make for how pro-corporation a schools curriculum should or should not be. im well aware of my own bias in this, i wont even try to be neutral without a good reason to be neutral.

nonetheless, i immediately thought of religion in schools. the thing about schools is, students are forced to go to them. if you believe in religious freedom, if it is a tenet of the society you live in, then forcing students to learn religion in schools is arguably a very bad (at least unfair) idea. i have softened a bit on that stance over time, as it would be fantastic for people to learn more about each others cultures. though the more mandatory your classes are, the less religion you should probably be teaching.

also, even if schools produced a perfectly rounded religion class, im not sure how well (or fairly) it would be taught. so all ideals and advantages taken into account, i remain unsure that we should have religion in schools.

and yet, even as an agnostic i can almost understand the feelings of religious parents sending their kids to a public school in the united states. religion will be treated as a sort of disease to be quarantined, out of concerns about conflict; history will teach that religion is all about crusades and blood.

history is very important, and if you try to present the most thorough possible curriculum, it is difficult to get everything just right. i do believe that religion is about far more than crusades and blood, and i think most people do. i dont think that schools are designed by people that hate religion, or even that theyre really anti-religion, but i do think that perhaps we are designing curriculum that is a little overprotective.

how pro-corporate a school should be follows this line of thought:

many religions are monopolistic; many corporations are monopolistic. to teach a monopolistic perspective in a school would severely limit the education of students, which should be quite a lot broader than that-- rather than yoke students to a monopoly.

when it comes to corporate monopolies though, rather than religious ones, anything goes. (just imagine apple and microsoft demanding "equal time" in classes. ha!) thus i think it would be far better if schools taught a perspective broader than that of their sponsors (and this is why i think academia is sort of kidding itself these days.)

since every monopoly wants-- well, a monopoly on the feelings of their prospective customers, anyone who is really teaching people to think should probably be fighting against this narrow perspective, the way they fight against religion in schools.

but, since schools are arguably overprotective against religious everything as it is, (even if there are good reasons to go in that direction, if perhaps to a slightly lesser degree) then maybe we dont need to be overprotective against corporate influence as well. we should certainly prevent corporations from having too much influence over the system-- monopolies take over systems entirely, if able.

those are my thoughts on the matter, as long as we dont cater to the absurd (and un-historically accurate) false dilemma between evolution and the bible, nor allow silicon valley to tell kids "everything about the future that we say is important happens to increase our bottom line; this is just a coincidence" because if we allow that, weve failed to teach something.

even before the takeover thats in progress, i have noted that schools have not-so-cleverly adopted whatever plans for computer education the industry really wanted. im not convinced this is due to corruption instead of ineptitude; traditionally, most teachers do not understand technology as well as their students.

i am, perhaps surprisingly, sympathetic to teachers about this fact, and i do think we owe it to ourselves (whether as teachers, students, parents or citizens) to find a way to fix this. even teachers who do understand technology better than their students (i had one, he flew around in wwii in a "spooky") would benefit from classes that are better designed in this regard. classes that fail the teachers also fail their students, so weve hardly designed the best ones yet.

note that modern computer programming languages were invented by a university math teacher, before assuming too much about educators.

chapter 5: free culture-- a slow start

as much as i think "open source" co-opts and sometimes even hurts free software, there is one thing it has arguably done a better job of supporting than free software has: the free culture movement.

free culture (as promoted by lawrence lessig) was inspired (according to lessig) by richard stallmans ideas about free software. drm has no real place in this movement.

rather than push for total freedom, creative commons was designed to allow artists to explore and express their own desire to have "some rights reserved" instead of all.

attribution, a requirement of most free software licenses, is required in all "cc" licenses except cc0.

commercial restrictions seem to be a good idea to some people, though there is no agreement on what this means and it leads to confusion about the users/remixers rights as much as it leads to progress. (note that commercial restrictions are not possible with free software.)

blocking derivative works would seem to go against the entire spirit of the free culture movement, which at least tries to restore the "right to remix".

despite the fact that free software luminaries like ben mako hill and fsf-founder richard stallman have tried to define free cultural works as having all of the same freedoms as free software, most of the non-software works related to the free software movement itself are not free culture works, but prohibit derivatives and remixes.

while stallman has argued that "fair use" should cover such rights, part of the reason that creative commons was founded was that fair use was failing sometimes, and cc licenses were a way to make related rights easier to share and access without legal teams.

1. so you have a free culture movement that first gives most people the ability to "race to the bottom" in terms of lowest-common-denominator sharing (by-nc-nd, non-commercial sharing only with no remixes)

2. you have a free software movement that argues for 4 freedoms for software and "free culture works" that also encourages people to make anything based on politics or opinion as non-free-culture works

3. you have snails-pace change from cc's inception in 2001 because free culture started with no solid direction, had very little solid direction, and still has very little solid direction: https://freeculture.org/Free_culture_priorities

4. when people push for a larger shift towards actual free culture, stallman litters the gnu and fsf pages with anti-free-culture propaganda. in short: the free software movement almost never uses free culture licenses and more often encourages people (by example) to not use them...

you would think that this very arbitrary "works of opinion" line (how many cultural works do not contain opinions?) at least leaves free software to promote free culture licenses for technical manuals; after all, the free software foundation says "free software needs free documentation."

so the fsf promotes a free culture license for documentation, at least, right?

NO!

because the gnu fdl is not a free culture license!

most of the problems that make the gfdl not a free culture license are covered in this story: https://www.linux.com/news/debian-decides-gnu-free-documentation-license

however, the worst aspect of the gnu fdl is that it effectively prevents bulk copying by people who do not have reliable network infrastructure: if you publish or distribute "opaque copies" numbering more than 100, you have to provide an electronic version as well.

so while the free software movement initially seems like the broadest supporter of free culture, in practice it encourages people to release works under non-free culture licenses for "works of opinion," as well as for technical manuals, as well as for practically everything.

my opinion of the free culture movement itself is that:

1. first it outlines the importance of people being able to go where they want to freely.

2. then it shoots itself in the foot, reloads, and shoots the other foot.

3. then it straps grenades to both feet, pulls the pins, and blows its feet off.

4. then it amputates its feet and reiterates the importance of moving around freely.

ive never witnessed an organisation dedicated to freedom for everyone that spends a greater percentage of its time promoting the exceptions to its own solutions!

most creative commons licenses are not compatible with "freedom 0" of the free software definition.

freedom 0, in practice, simultaneously forbids you from using drm without supplying the keys to the users-- and also forbids you from forbidding (via software license) that drm be implemented.

since most cc licenses forbid drm, they do not offer freedom 0 and thus are not free software licenses. except cc0: a universal license (waiver) for offering public domain-like access to works, even in countries (such as finland) that do not allow authors to submit their own work to the public domain.

the point of creative commons licenses was to make it easier and less confusing to share your work as an author or artist.

and the free software foundation claims to support free culture while making it nearly impossible to gain any of the advantages of creative commons, free culture licenses, or even to understand the only license (actually a waiver) that is simultaneously compatible with the gpl, free software licensing, the free culture definition, and other cc licenses.

does the free software movement really support the free culture movement? i dont think it does. the "support" of free software towards creative commons is more like the support that open source has offered free culture: antagonistic, confusion-spreading, support in word only.

"open source" has also thrown a very large wrench into free culture by insisting that-- although cc0 is gpl compatible according to the gpl authors-- and the gpl itself is an "open source" license-- cc0 is not an osi-approved license for software!

since the beginning, free culture was held far back from its own success due to its own lack of direction, bizarre though absurdly limited "faint praise" support from free software, and completely self-contradicting support from practically every organisation that touches it.

its a miracle if anybody understands the free culture movement at this point, and no wonder that in practical terms, it barely exists at all.

chapter 6: computing-- the alliance

25 years ago, it was so much easier to learn to code; and today it is more important than ever.

while silicon valley may have the worst reasons to teach you how to code, professional coding isnt even the best reason to learn. coding is the shortest route to having a fundamental understanding of computers– unlike abstract logic (which students have relatively little interest in) coding is an activity which can teach abstract ideas and apply them to something fun or useful instantly, all in the same lesson.

coding is computer literacy, and since the 1990s computing has shifted away from fundamentals and towards an entirely pre-packaged software ecosystem. yet you probably didnt learn how to read just so it would get you a job; you probably learned so that you could understand the world around you.

with every company trying to get you to put your photos, personal information, financial data and professional and personal contacts in “the cloud“, it would be a good idea for people to fully understand where their data is and why. with so many modern issues revolving around technology, we need a public with a better understanding of its pros, cons and fundamental workings.

we really cant afford to keep pushing the idea that things “just work,” and hide it all from the owner. we really cant afford to have tech giants in the western half of the country own more of our lives than we do.

if we understand that not only do things not “just work,” but how much that attitude threatens our security and everything we do with computers, its time that we made more educational tools that actually help people understand how their stuff works.

and more than hardware (especially thanks to “the cloud”,) the one thing that really runs through every aspect of this modern lifestyle is software. no, you dont have to be able to write your own text editor or web browser. but all computer code has more in common than it has in its differences.

these common threads through all code and software are worth teaching to everyone; not so they can get a job in it, but so they can stay informed about the world through their own means, and not just at the mercy of tech giants.

using the right tools, we can make increasingly complex, fundamental computing tasks simple again; not just by providing a friendly layer on top of everything else that will be abandoned in 5 years, but by helping you get as far into the workings of your own computing as youd like to go.

this is part of a very large and mostly unorganised effort to make certain your computing– and the life you live augmented by that computing– stays your own. it matters less whether you code in javascript or python. it matters less whether you run bsd or one of our distros. it even matters less that “i can choose to be less free.”

before you can make the choices that make your computing yours however, it is necessary to learn more about your choices. until then, its just different ecosystems arguing which “choice” is best from their own perspective.

a society with even a notable amount of freedom can no longer afford for those choices to be the only kind available.

chapter 7: education-- the alliance

in the 80s and early 90s, education was still on its way to determining the direction it would go with computers for the next few decades-- and that direction, which started out with fairly rounded and fundamental lessons about computing, quickly transformed into application training.

application training is arguably useful, though i would say that in a world with tens of thousands of mcdonalds locations, a class about how to manage a restaurant franchise is probably of greater use to students than a class about what to do as a person working a drive-thru window. and yet! computer education focuses on application training.

this i believe, is the greatest reason that people call themselves "computer illiterate" as cheerfully as if it were on par with beating cancer. and though we are doing it for the wrong reasons, i definitely believe that giving kids single-board computers and teaching them to code is not just a good idea-- it is the fastest and most direct (and most fun) route to computer literacy.

i want to say that i doubt most teachers even get how much fun it is to learn this stuff. to paraphrase a speech from daniel quinn, a teacher may have cause to feel vindicated if a student doesnt enjoy learning to code for the first time.

"there, you see? i hate computers and you should too! i only use them for email and youtube..." but there are always a few who "get it" and im not picking on them, i probably had every kind of teacher in the schools i went to.

the danger of letting a monopoly determine how to teach these things is just as severe, even if we agree that coding should be taught to every student. in 2016, i met a university student who was taking bioinformatics. she had a laptop running ubuntu, which i was excited about because it guaranteed she also had python installed. i asked her if she ever learned about coding.

"oh, coding, i hate that" she told me.

"no, i suppose i dont really hate coding, just my teacher."

teaching is very often our one opportunity to produce a student that either loves or hates a particular subject. we have a nation full of people very proud to be computer illiterate, even when they use computers (at work and at home, and outside their home) all day long.

and we have a very fast way to teach real computer literacy, which we have largely abandoned for multiple decades of application training, which teachers largely dont get and which students really hate, until we change the approach.

im not sure i believe in this "computational thinking" craze. mostly i believe it could be a way to try to teach coding with a built-in aversion to coding. im not sure any curriculum ought to be invented specifically to avoid the very subject it teaches, but as long as a school doesnt avoid coding first then perhaps "computational thinking" has a place. if you teach it first, youve only extended the first introduction to coding and made it more tedious for everybody. not great.

a better approach to teaching coding more easily would be to make coding (actual coding, not the "thing-which-must-not-be-named" approach) easier to teach. as tautological as that sounds, its a strong recommendation in the midst of yet more-illogical alternatives.

the concept of making coding easier to teach is hardly new. in the 60s through the 80s, it was via basic and logo. in this century, it is python and logo derivatives. i love logo, particularly the style of the language itself, when it is used for drawing and nearly devoid of punctuation in syntax.

to take this to the next step and have drag-and-drop coding is clever in many ways, and fine for younger students. if you wish to convince me that the future will have more drag-and-drop coding even for professional applications, i am not at all sceptical; you could be right. on whether it is really necessary (or even better) to start everyone with drag-and-drop coding however, i am deeply sceptical.

i would recommend looking at all the educational languages (and other languages popular in education) over the past 50 years, and try to work out the common threads. lower in syntax is a real winner, whether you personally believe in lower syntax or not. logo traditionally has very few parameters as well, and basic had far fewer parameters than many modern languages today. modern languages are more likely to require an understanding of libraries, but at least python makes loading a library extremely simple:

import libraryname

# no syntax in previous line

libraryname.command(parameters)

this is pretty standard, simple stuff. do you just want to import a single command?

from libraryname import command

command(parameters)

# same as built-in commands, syntactically
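to make this concrete, here is the same pattern with a real module from pythons standard library (random is just one example; any module works the same way):

```python
# whole-module import: commands are prefixed with the module name
import random

roll = random.randint(1, 6)   # an integer from 1 to 6, like rolling a die

# single-command import: the command then reads like a built-in
from random import randint

roll = randint(1, 6)
print(roll)
```

either style runs as-is; the second just saves the student one dot of syntax.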

if youre thinking "i still dont like the look of that," good. then perhaps you shouldnt teach python first. i tried, but my intended audience was people who werent taking a class, and who hadnt already taught themselves to code. python is a little too much like math for some people.

if you think thats nitpicking, you should read more about the history of your favourite computer language. commands that are similar to english words come from commands that are made out of english words, which come from university math professor and compiler pioneer grace hopper.

hopper wanted to design languages to get tasks completed in a professional setting, not to be mathematically pure. she knew that business people would prefer commands to symbols, but people told her that a command-based compiler would never work.

today we have commands, and a bit of punctuation may help us to organise them in the many lines of source, but the reason we have commands is that people generally dont like symbols!

i believe we should continue to design languages for education. other people think we should train people to code for industry; but doing things for industry has already resulted in a proudly computer illiterate society that hates the subject and possibly even invents entire educational subjects just to avoid coding directly. yet in the 80s you could tell people to

print "hello, world!"

and it wasnt terrible at all. we are getting back towards that, but i feel one thing thats very good for any organisation dedicated to free software and other free media would be to produce a more computer literate society and even a society that is not afraid of the entire concept of coding.

it is not vital to teach first-time students industry-standard programming languages; most of the coding that students do when they first learn to code has nothing to do with industry standards, even when the language used is one popular with industry.

yet in tutorial after tutorial, intro after intro, youll hear it repeated that teaching coding is about the concepts, not a specific language. "which language should i learn first?" "its really about the concepts."

when i designed my educational language, i kept it down to 7 concepts so that you could learn from a simple checklist:

1. variables

2. input

3. output

4. basic math

5. loops

6. conditionals

7. functions

i can even explain these concepts while avoiding code, using things such as icons or filenames for variable names. but dont settle for that alone-- teach people real working code! only then will they get what computing is really about.
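as an example of "real working code," all 7 concepts from the checklist above fit in one python program short enough to read aloud (a sketch only, not any particular curriculum; the names are made up):

```python
# 7. functions: a function takes input and gives back output
def greet(name):
    return "hello, " + name

# 1. variables
names = ["ada", "grace"]   # 2. input (hard-coded here so it runs anywhere)
total = 0

# 5. loops
for name in names:
    # 4. basic math
    total = total + len(name)
    # 3. output
    print(greet(name))

# 6. conditionals
if total > 6:
    print("total letters:", total)
```

a student can change any line of this and immediately see the result, which is the whole point.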

as to what language to use, i did a few experiments to teach coding to people using their own programming language. for people who dont know how to code, this made it much easier. for people who had already learned to code, this helped make them into better coders.

we should help more educators who are not developers, work with developers who are maybe not educators, to develop better educational languages together. this is a bit like herding cats, and also herding dogs together with cats, and yet that can be done for long enough to produce new languages.

but do we really need more languages? according to brown university professor shriram krishnamurthi, programming languages are the sort of thing you could write inadvertently. given this potential hazard, it is recommended that instead you do so on purpose.

if you would like to create your own programming language, i encourage you to do so. you dont actually need a computer for this. what you need to do is think of a few things that a computer does. a "code atlas" is a useful list for this purpose, being a (perhaps lengthy) list of things a computer can do, probably sorted by the type of function it performs.

the student selects 5 or more items from this list, or writes their own list of 5 or more items, and they also come up with command names for each task. then they describe what sort of information each command would need to perform its task. this would probably get to the command via parameters.

at this point, the instructor may optionally produce a small language for the student to write code for. i have done this with a few people, and taught rudimentary coding and rudimentary language design in the same lesson. but the understanding about coding that can come from this seemingly cart-before-the-horse idea (a concept very popular in supermarkets, if you ever noticed) is significant.
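as a sketch of what that "small language for the student" could look like, here is a toy interpreter for a 5-command language. every command name and behaviour here is invented for illustration; the point is how little code it takes once the student has named their commands and parameters:

```python
# a toy interpreter for a hypothetical 5-command language.
# the language works on a single number, plus a "say" command for text.

def run(program):
    value = 0
    output = []
    for line in program.splitlines():
        words = line.split()
        if not words:
            continue                     # skip blank lines
        command, params = words[0], words[1:]
        if command == "set":             # set 5   -> value becomes 5
            value = int(params[0])
        elif command == "add":           # add 3   -> value goes up by 3
            value = value + int(params[0])
        elif command == "double":        # double  -> value doubles
            value = value * 2
        elif command == "say":           # say hi  -> output the words given
            output.append(" ".join(params))
        elif command == "show":          # show    -> output the current value
            output.append(str(value))
    return output

demo = """
say hello
set 5
add 3
double
show
"""
print("\n".join(run(demo)))
```

the student writes programs like `demo`; the instructor (or a later lesson) writes `run`. one if/elif chain per command is enough to make a paper language real.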

if you dont have that sort of free time (or class time either,) then you could simply create your own language for teaching, or help some developer design a language that is better for teachers and students. if the inventor of the compiler can come up with an idea this user-friendly, if logo can be made as relevant as it is in the 21st century, by students at m.i.t. no less, then i hardly think this is a crazy idea.

meanwhile i am surprised that the free software foundation thinks that lobbying for better education is going to be more effective than developing better education. they didnt lobby for congress to create a better operating system, they did it themselves. a good alternative or complement to the fsf would probably do more around education. its just logical.

chapter 8: culture-- the alliance

for years i hoped there would be a free culture foundation, similar to the free software foundation but working towards free culture. the closest thing was students for free culture, which im happy to say has renamed themselves to the free culture foundation. theyre the best people for that name.

the internet is the largest library that mankind has ever built, and ebooks are not bad in and of themselves. if you want to publish a story or instructions for something, the internet can make your book go potentially anywhere, very quickly, in a way that wasnt even dreamed of by most people as recently as 30 years ago.

stallmans "the right to read" has come too close to being a true story, but at the time he wrote it amazon did not offer an ebook platform yet. the worst aspect of drm i could think of was on books, but that wasnt yet a real concern-- until amazon decided that it would be an ebook company.

i was a happy customer until then, and i used to say "when they have drm on books, then drm will be truly evil." amazon proves this. they give you encrypted books and they control the keys-- this means if youre a library, the publishers (rather than the librarians) control the collection. this is worse than backwards. the company that sells you your books should have no control over that book at all after purchase. in the united states, first sale doctrine allows that you should decide what happens to the book next-- drm prevents that, stripping you of legal rights with technical means.

the dmca and other laws like it provide a catch-22, making it illegal to exercise your legal rights by circumventing any technical measure that tries to restrict them. this is worse than backwards.

the person in charge of creating exceptions to this aspect of the dmca is the librarian of congress.

but if libraries of the future are controlled by publishers, publishers are not sympathetic to libraries at all. libraries drive book sales, despite offering everyone the ability to read for free. publishers dont seem to care about sales as much as they care about control. year after year, drm proves this. it punishes honest customers first, and may get around to punishing people who copy things illegally later.

a surprising number of people do not know about the free culture movement-- at least it is surprising if you didnt read chapter 5. the free culture foundation is young, but is putting itself together very slowly. they are very narrow in what they want to add to their collection-- beyond wanting only freely licensed works. apart from pushing the free culture foundation to broaden their scope, the alliance can build its own libraries-- libraries of works that are free cultural works and also free software works-- libraries that actually host files and also collections of links to free cultural and free software files.

obviously the internet is such a library, but it is not organised as well as one. you can look for things with a search engine that is probably not as honest as a card catalogue, one that disrupts your search by design with the data it wants you to have. not all of that can be prevented, but it can be mitigated if enough people care about building a library. the internet archive is an incredible resource we should make use of, but we can make use of other resources in addition to the archive.

because of how free software and free culture work in practice, it is possible to build and maintain something until someone else picks up the job. collaboration among strangers is possible, sometimes without coordination. it defies conventional reason, but happens often.

when someone remixes or shares your work, you usually dont know who it will be or what it will be like. but if the internet is any indicator, thats how it is anyway. free culture is a movement that gives permission to what people are likely to do anyway, or want to do.

there are other organisations related to these concepts, but i have yet to watch most of them make these things happen. when monopolies like facebook can grow at disturbing rates, we need free libraries that dont take 20 years to gain a collection.

free software and free culture are two concepts that go together incredibly well, but are not being harnessed. theres lots being deliberately glossed over with this statement-- im aware of enough software works designed to assist in sharing things socially to fill up the next page or more, and ive watched them over the years. i could probably point out faults in each one, but like with distros, theres only so much you can do once an organisation is in the way.

the best route to software that anybody can use, is software anybody can change. so much ridiculous sophistication goes into making these applications foolproof, and the result is that only sophisticated people can do anything to change them.

thats really not how the web was designed. sure, its how ted nelsons xanadu was proposed, but the web got where it was by being relatively simple and feature-incomplete. what the best scientists and technicians and best coders can do is give us tools to improve things, but until we have our tools that a lot more of us understand, those other tools are only going to be used by a few people. if you think im joking, look at how few people actually use them. do you run php? do you run a server? have you ever run a gnu social instance?

dont get me wrong, as a technology gnu social is very positive and as a way to host files, very cool. silicon valley is always looking for overly sophisticated ways to create "simple" and "easy to use" solutions, that are at least complex enough to require support.

im not going to write another diatribe about abstractions vs actual simplicity, go check out andy menders web blog for that (and ask him to release his work under a free culture license. which he will likely say no to, but be polite.)

if we can get more people writing the simplest of code, we can have a more free, more literate society that doesnt need overly abstracted solutions to simple problems that we could have solved a decade ago, if we werent creating an industry out of complicating everything.

ive followed computers for decades. no matter how complicated everybody tries to make things for me, i can usually find a simpler way, provided that im free to do what i want.

thats the free software and free culture for everyone, not just for "hackers" or experts or admins or "developers, developers, developers, developers" but you and me.

and the girl taking bioinformatics, and the guy who used to like ubuntu before they became "humanity to corporations" and the advocate who wonders exactly why free culture cant work exactly like free software.

theres no real reason it cant. so we wont make those excuses, we will just try to find more ways to make it happen.

figosdev, july 2018

home: https://freemedia.neocities.org