Saturday, December 17, 2022

ANS -- 'Males Are Naturally More Promiscuous Than Choosy Females': The junk science story from the 1940s that just won't go away

This shows us how powerful our biases can be.  Why did we think a study on fruit flies applied to us?
--Kim


'Males Are Naturally More Promiscuous Than Choosy Females'

The junk science story from the 1940s that just won't go away


One of the "everyone knows" statements that I come across most often is the one saying that men are just naturally more promiscuous than women. They are supposedly biologically designed to spread their seed with an evolutionarily sound mating strategy and this explains human mating behaviors — for all humans.

Besides the fact that beliefs that come from "everyone knows" are nearly always mostly or totally wrong, these particular ones about men being naturally more promiscuous as an evolutionary mating strategy come out of a highly problematic scientific experiment done in 1948 — on fruit flies. Nonetheless, the Bateman principle, as it came to be known, has almost taken on a life of its own over the years and is still an oft-cited and much referenced bit of research.

The fact that Bateman's research has never had a strict repetition ought to be concerning considering that replicability is one of the hallmarks of scientific validity, but it doesn't seem to matter.

Bateman's experimental study of Drosophila melanogaster produced conclusions that are now part of the bedrock premises of modern sexual selection. Today it is the most cited experimental study in sexual selection, and famous as the first experimental demonstration of sex differences in the relationship between number of mates and relative reproductive success. We repeated the experimental methodology of the original to evaluate its reliability. The results indicate that Bateman's methodology of visible mutations to assign parentage and reproductive success to subject adults is significantly biased. Source

The above study, one of many that have reached the same conclusions, doesn't seem to have made much of a dent in the strong belief in Bateman's findings. This cultural narrative continues nearly three quarters of a century later for the same reason that it was embraced in the first place — because it creates a story that is appealing to some people and reinforces patriarchal norms.

When Patricia Adair Gowaty's team took on trying to replicate Bateman's study in 2012 they noted serious flaws in his methodology which resulted in biased results.

Bateman's study was taken as a scientific confirmation of Darwin's theory because it fit a popular, albeit rather Victorian narrative.

"Here was a classic paper that has been read by legions of graduate students, any one of whom is competent enough to see this error," Gowaty said. "Bateman's results were believed so wholeheartedly that the paper characterized what is and isn't worth investigating in the biology of female behavior."

"Our worldviews constrain our imaginations," Gowaty said. "For some people, Bateman's result was so comforting that it wasn't worth challenging. I think people just accepted it." Source

Darwin was probably the first to publicly opine that across species, males were randy and promiscuous and that females were reticent and choosy. Bateman's study was taken as a scientific confirmation of that theory because it fit a popular, albeit rather Victorian narrative. But what evolutionary biologists have discovered in the past several decades is that this isn't necessarily so.

Aside from the fact that many primate females are far from sexually reticent or choosy, across all species, mating with several males confers a significant evolutionary benefit.

Gowaty describes the benefits of multiple mates as an answer to the never-ending evolutionary struggle against what may be the world's greatest predator: disease.

In this illness-driven arms race, organisms that produce offspring from multiple mates are more likely to produce some children with the right antibodies to survive the next generation of viruses, bacteria and parasites. Source

Both male and female Barbary macaques mate repeatedly in rapid succession. Female chimpanzees are often sexually aggressive, and when primatologist Sarah Hrdy first described the ribald antics of female hanuman langurs she had witnessed in her field research to the scientific community of the 1970s, it created a scandal.

More than 30 years of subsequent research has confirmed Hrdy's findings and expanded on them to reveal that females in many primate species, humans included, engage in a diversity of sexual strategies to enhance their overall reproductive success. For example, in saddle-backed tamarins, females will solicit sex from multiple males who will each help to care for her offspring.* Female mouse lemurs will mate with up to seven males during a single night. Capuchin monkeys will seek out mating opportunities in the early stages of their pregnancy, presumably to confuse males about paternity. And bonobo females will have sex with everybody at pretty much any time they feel like it. Source

In addition to primates, females in many bird and several insect species mate with more than one partner. For example, an estimated 90% of bird species are socially monogamous, but genetic and behavioral studies have begun to reveal that in some species up to 75% of the offspring may come from "extra-pair copulations." In other words, mama bird had sexual partners other than the one she was tending the nest with.

Cooperative polyandry occurs among the Galapagos hawk, as females will mate with up to seven different males during the mating season. Throughout the nesting period, the female and her multiple male partners take turns incubating the eggs and feeding the offspring. Wattled jacana females display resource defense polyandry, mating with multiple males on her territory and laying clutches of eggs within intervals of less than two weeks. Source

Aside from the fact that Bateman's research methodology was terrible, why on earth would any reasonable person generalize the behavior of fruit flies to humans? Why did generation after generation of scientists overlook the blatant flaws in both his research and his conclusions? Whatever the reasons, what we know now is that Bateman had it pretty much all wrong.

Females can gain benefits, such as reduced infanticide risk or assurance of fertilization, from mating with multiple males (polyandry). Similarly, the assumption that males will always exhibit indiscriminate mating has also been challenged by comparative studies, particularly in insects, in which the energetic costs of sperm production, courtship and copulation can select for male choosiness and the prudent allocation of mating effort. Source

It always makes good sense to look at the larger picture, to consider multiple angles for any conclusion, and to be skeptical of a research study that cannot be replicated using the same methodology. But unfortunately, too often, even within the scientific world, narratives that seem right because they meet cultural expectations can be embraced as truth.

People who bother to look and pay attention have known for a long time that Bateman's conclusions were wrong, but most people don't want to pay attention beyond "everyone knows" sorts of stories that confirm their existing beliefs and biases. As one group of researchers noted in the conclusion of their look into Bateman's work,

Our field might profitably do some soul-searching: Why were Bateman's obvious errors overlooked for so long? As we said in our primary report, legions of graduate students have for the past 40 years read and discussed Bateman. Why did they not bring attention to the errors? Surely all of them, among biologists at least, understand the elements of mutation, inheritance and Mendelian genetics. Why did their professors not challenge Bateman's results? We are inclined to the idea that Bateman's results and conclusions are so similar to status quo, dominating world-views (competitive males, dependent females) that pre-existing cultural biases of readers may have dampened skepticism and objectivity. Perhaps lack of repetition is simply due to lack of professional incentives such as funding for repetitions. (Although that doesn't explain why so few people ever pointed to the glaring errors in methodology — comment mine.) Source

As another group of researchers noted, "We argue that human mating strategies are unlikely to conform to a single universal pattern." Unlike animals, we also have to factor in social norms, which may vary from culture to culture and from era to era. In some places in the world even now, partible paternity, in which several men have sex with a woman and each is considered a father of her child, is a long-standing practice. It spreads fatherly feelings throughout the group, which helps to maintain solidarity and cohesion as well as promoting the well-being of a greater number of children. And of course, in a lot of places, strict monogamy (at least institutionally, if not in actual practice) is the norm.

Deciding that patriarchal mores are supported by science (when they aren't) is really just another way for this social system to try to justify itself. But for a long time it's worked. A lot of people still believe this stuff and pass it along as gospel truth because it makes sense to them in a patriarchal context — because that is mostly what they've known — but that doesn't mean that it is actually true. "Everyone knows" is cultural currency but it isn't scientific fact. For me, I'd rather know the truth.

© Copyright Elle Beau 2022


ANS -- On Using AI to write your assignment...

This is from Facebook.  It describes an upcoming set of moral dilemmas.  What do you think people should do about this?
--Kim


Today, I turned in the first plagiarist I've caught using A.I. software to write her work, and I thought some people might be curious about the details.
The student used ChatGPT (https://chat.openai.com/chat), an advanced chatbot that produces human-like responses to user-generated prompts. Such prompts might range from "Explain the Krebs cycle" to (as in my case) "Write 500 words on Hume and the paradox of horror."
This technology is about 3 weeks old.
ChatGPT responds in seconds with a response that looks like it was written by a human—moreover, a human with a good sense of grammar and an understanding of how essays should be structured. In my case, the first indicator that I was dealing with A.I. is that, despite the syntactic coherence of the essay, it made no sense. The essay confidently and thoroughly described Hume's views on the paradox of horror in a way that was entirely wrong. It did say some true things about Hume, and it knew what the paradox of horror was, but it was just bullshitting after that. To someone who didn't know what Hume would say about the paradox, it was perfectly readable—even compelling. To someone familiar with the material, it raised any number of flags. ChatGPT also sucks at citing, another flag. This is good news for upper-level courses in philosophy, where the material is pretty complex and obscure. But for freshman-level classes (to say nothing of assignments in other disciplines, where one might be asked to explain the dominant themes of Moby Dick, or the causes of the war in Ukraine—both prompts I tested), this is a game-changer.
ChatGPT uses a neural network, a kind of artificial intelligence that is trained on a large set of data so that it can do exactly what ChatGPT is doing. The software essentially reprograms and reprograms itself until the testers are satisfied. However, as a result, the "programmers" won't really know what's going on inside it: the neural network takes in a whole mess of data, where it's added to a soup, with data points connected in any number of ways. The more it trains, the better it gets. Essentially, ChatGPT is learning, and ChatGPT is an infant. In a month, it will be smarter.
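To make that idea a bit more concrete, here is a minimal, purely illustrative sketch of a training loop in Python. It is not ChatGPT's code, and it uses a toy linear model rather than a real neural network, but it shows the same basic cycle described above: the model's internal weights are repeatedly adjusted to shrink the gap between its output and the training examples.

```python
# Toy illustration (not ChatGPT's actual code) of how a model "reprograms itself":
# it repeatedly nudges its internal weights to reduce the error between its
# predictions and the examples it is trained on. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                        # 100 training examples, 3 features each
true_w = np.array([2.0, -1.0, 0.5])                  # the pattern hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=100)     # targets the model should learn

w = np.zeros(3)                                      # the model's adjustable "program"
for step in range(500):                              # training loop: predict, measure, adjust
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)                 # direction that reduces the error
    w -= 0.1 * grad                                  # the "reprogramming" step

print("learned weights:", w)                         # ends up close to true_w
```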
Happily, the same team who developed ChatGPT also developed a GPT Detector (https://huggingface.co/openai-detector/), which uses the same methods that ChatGPT uses to produce responses in order to analyze text and determine the likelihood that it was produced using GPT technology. Fortunately, I knew about the GPT Detector, so I used it to analyze samples of the student's essay and compared the results with other student responses to the same essay prompt. The Detector spits out a likelihood that the text is "Fake" or "Real". Any random chunk of the student's essay came back around 99.9% Fake, versus any random chunk of any other student's writing, which would come back around 99.9% Real. This gave me some confidence in my hypothesis. The problem is that, unlike plagiarism-detection software like Turnitin, the GPT Detector can't point at something on the Internet that one might use to independently verify plagiarism. The first problem is that ChatGPT doesn't search the Internet—if the data isn't in its training data, it has no access to it. The second problem is that what ChatGPT uses is the soup of data in its neural network, and there's no way to check how it produces its answers. Again: its "programmers" don't know how it comes up with any given response. As such, it's hard to treat the "99.9% Fake" determination of the GPT Detector as definitive: there's no way to know how it came up with that result.
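For readers curious about what such a check can look like in code, here is a rough sketch using the publicly available roberta-base-openai-detector model on the Hugging Face hub. That model is related to the web-based detector linked above but is not necessarily the exact tool the author used, and the sample text and the "Real"/"Fake" label names are assumptions for illustration only.

```python
# Sketch only: score a passage with a publicly hosted GPT-output detector via
# the Hugging Face transformers library. Model name and its "Real"/"Fake"
# labels are assumptions based on the public model card, not the author's setup.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

essay_chunk = "Hume's paradox of horror asks why audiences take pleasure in frightening art..."
result = detector(essay_chunk)
print(result)   # e.g. [{'label': 'Fake', 'score': 0.99}] for likely machine-generated text
```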
For the moment, there are some saving graces. Although every time you prompt ChatGPT, it will give at least a slightly different answer, I've noticed some consistencies in how it structures essays. In future, that will be enough to raise further flags for me. But, again, ChatGPT is still learning, so it may well get better. Remember: it's about 3 weeks old, and it's designed to learn.
Administrations are going to have to develop standards for dealing with these kinds of cases, and they're going to have to do it FAST. In my case, the student admitted to using ChatGPT, but if she hadn't, I can't say whether all of this would have been enough evidence. This is too new. But it's going to catch on. It would have taken my student about 5 minutes to write this essay using ChatGPT. Expect a flood, people, not a trickle. In future, I expect I'm going to institute a policy stating that if I believe material submitted by a student was produced by A.I., I will throw it out and give the student an impromptu oral exam on the same material. Until my school develops some standard for dealing with this sort of thing, it's the only path I can think of.

Friday, December 16, 2022

Fwd: Tidbit 12/17/22



---------- Forwarded message ---------
From: Joyce Segal <joyceck10@gmail.com>
Date: Thu, Dec 15, 2022 at 12:45 AM
Subject: Tidbit 12/17/22
To: Kim Cooper <kimc0240@gmail.com>


The Bureau of Labor Statistics reported yesterday that inflation continued to slow its roll in November, as US consumer prices rose just 0.1% from the previous month. Compared to a year ago, prices were up 7.1% in November, which is actually the fifth consecutive month of declining annual inflation.

The inflation figures were lower than economists predicted, furthering the relatively cheery mood ahead of today's Federal Reserve interest rate announcement. Plateauing consumer prices are a signal to the Fed that it can start to chill with historically massive rate hikes that have been slowing the economy. And it plans to: Chair Jerome Powell is expected to increase interest rates by 50 basis points today, lower than the 75 bps increases announced at the previous four meetings.

--
Joyce Cooper
CEO SunSmartPower
650-430-6243
SunSmartPower.com

Monday, December 12, 2022

ANS -- Unapologetic Black Power in the South

I found this on Facebook.  It's an opinion about what just happened in Georgia with the election.  
--Kim




Unapologetic Black Power in the South
Dec. 11, 2022, 3:00 p.m. ET

By Charles M. Blow, Opinion Columnist

I'm a strong advocate of Black reverse migration — Black people returning to Southern states from cities in the North and West in order to concentrate political power.
This reverse migration was already happening before my advocacy, and it continues. As the demographer William H. Frey wrote for the Brookings Institution in September, the reversal "began as a trickle in the 1970s, increased in the 1990s, and turned into a virtual evacuation from many Northern areas in subsequent decades."
There are many reasons for this reversal, primarily economic, but I specifically propose adding the accrual of political power — statewide political power — to the mix.
One of the ways that people often push back on what I'm proposing is to worry aloud about the opposition and backlash to a rising Black population and power base in Southern states.

Well, Georgia is providing a proving ground for this debate in real life.
I heard so many people after the Georgia runoff in which Raphael Warnock defeated Herschel Walker who said some version of "Yes, but it was still too close."
It seemed to me that those comments — and many others — missed the bigger point: Something absolutely historic is happening in Georgia that portends a massive political realignment for several Southern states.
Georgia voters proved this year that the historic election of a Black senator from a Southern state by a coalition led in many ways by Black people was not a fluke.
And that coalition sent Warnock back to the Senate in the face of fierce opposition. Not only did the Georgia state legislature and Gov. Brian Kemp do their best to suppress voters — a tactic almost always designed to marginalize nonwhite voters — but Republicans also turned out in droves to try to retain power that they see slipping from their grasp.
Furthermore, in the general election, Black turnout was down. According to Nate Cohn, the Black share of the electorate fell to its lowest level since 2006.

But then in the runoff, when the choice was narrowed and sharpened, the Warnock coalition bounced back, stronger and defiant.
According to the Georgia secretary of state's office, Black voters only account for 29 percent of registered active voters in the state. During early voting, Black voters outperformed. They went to the polls to prove a point. They voted to flex. According to a Pew Research Center report, the number of Black people registered to vote in Georgia increased 25 percent from 2016 to 2020, a far larger increase than any other racial group.
Yes, many, like me, were offended by the presence of Walker as the alternative, and were voting as much to defy Walker as to affirm Warnock.
But even there, I think we have to step back, take a breath, and soberly assess how historic his presence was. The power structure in Georgia was so shocked by what this Black-led coalition had done that they allowed Donald Trump to foist a thoroughly unqualified Black Republican on them, thinking that he would help them win back power.
Georgia Republicans thought they could fracture the Black vote. They couldn't. It held strong and united.
There is a great, nearly inexpressible exhilaration in this realization as a Black citizen and voter. Black people and other minorities weren't simply being called upon to tip the balance when white voters split down the middle. Every other Black senator in American history has been elected by a coalition led by white liberals. Warnock is the first elected by a coalition led by Black people.
Black people were leading the charge in his election, and he was solid, bright and competent. This startling new reality of electoral politics demolished any lingering lies about inferior Black leadership or intemperate Black voters. Black voters want what any other voter should want: solid leaders who are responsive to them.

Some may look at the defeat of Stacey Abrams in the governor's race and see it as a sign of caution, that the "Old South" is alive and well. But I see it differently. Power will not be passively relinquished. Those with it will fight like hell to retain it. And in that power struggle, they will win some of the battles.
Each election will depend on candidates and campaigns. The race between Kemp and Abrams is not a predictor of what is possible. Black voters in Georgia keep reminding themselves what's possible when they focus their attention and effort as they did in this runoff.
That kind of engagement — and the reward of winning — is psychologically powerful. Once a people taste power, state power, it seems to me that it will be hard to turn away from it. Having it begins to feel normal and expected.
That is a reality that many in this country have feared for centuries. That is a reality that I now relish.

Thursday, December 01, 2022

ANS -- Is the Greatest Threat to Humanity Something Called an Algorithm?

Here's an intro, from FB (from Sara Robinson), to an article by Thom Hartmann.  


Let's put a moral frame around this. Humans utterly rely on good information for our survival -- which is why every culture on the planet has strong taboos against lying.
Social media algorithms don't include these taboos. And this fact may threaten our very survival.
Thoughty stuff from Thom Hartmann.


Is the Greatest Threat to Humanity Something Called an Algorithm?

Algorithms used in social media are not tuned for what is best for society. They don't ask themselves, "Is this true?" or "Will this information help or hurt humanity?"



The man who coined the term "virtual reality" and helped create Web 2.0, Jaron Lanier, recently told a reporter for The Guardian there's an aspect to the internet that could endanger the literal survival of humanity as a species. It's an amazing story, and I believe he's 100% right.

Humans are fragile creatures. We don't have fangs or claws to protect ourselves from other animals that might want to eat us. We don't have fur or a pelt to protect us from the elements.

What we do have, however, that has allowed us to conquer the planet and survive for eons is our interconnection with each other, something we generally refer to as society, community, and culture.

Humans are social animals. Our ability to share information with each other in ways that are meaningful and credible has been the key to our survival.

For hundreds of thousands of years, it was scouts, neighbors, and family members reporting predators or prey, animal or human, just around the other side of the mountain or on the perimeter of the nighttime fire, that kept our ancestors safe.

Over the millennia, we developed elaborate social constructs or "rules of society" to enhance our confidence in the information we're getting from our fellow humans, because that information may be essential to our survival.

When important information is twisted, distorted, or lied about, it can put us at risk. And that's what's happening right now across multiple social media platforms, causing people to question global warming and other science (Covid vaccines, for example) while engaging in behavior destructive to a democratic, peaceful, functioning world.

These rules or Commandments about truthful communication are at the core of every religion, every culture, and every society from the most technologically sophisticated to those "primitives" still living in jungles, forests, and wild mountain areas.

They're built into our deepest and most ancient oral traditions, stretching back to dim antiquity, known by every person in every culture around the world.

We in western culture can all recite the story of The Little Boy Who Cried Wolf, Eve's lie to her god about consuming the forbidden fruit, and the consequences of courtiers' lies about The Emperor's New Clothes. Every other culture on Earth has their versions of the same stories.

We know, remember, and pass along these stories because truthful information is essential to the survival of family, tribe, community, nation, and ultimately humanity itself.

They're even built into the language of our religions. While we were discussing the first draft of this article on a Zoom call yesterday, my dear friend Rabbi Hillel Zeitlin told me how the Torah calls the inanimate mineral world Domem or "silent" while the realm of humans is known as Midaber or "speaking."

"When the Torah describes the infusion of the soul into man," he said, "one of the most ancient commentators, Onkelos, described it as 'the speaking spirit' or Ruach Memalela."

And that "speaking spirit" — our ability to communicate with others — carries with it an obligation to tell the truth: the Bible is filled with stories of disasters that came about because of untrue information (as are the Koran, Bhagvadgita, and holy books of every other religion) .

This explains why:

—We universally disparage lies and liars: it's often the first lesson parents teach their young children.

—When information is particularly critical to our survival or quality of life, we build into law severe penalties for lying (called "perjury").

—We bestow our highest honors, things like Nobel and Pulitzer prizes, on people who have been particularly effective at finding important truthful information and sharing it.

—We built protection for a free, open, and accountable press into our Constitution 231 years ago so future generations of Americans could rely on competent and full-spectrum information when making decisions about leadership, governance, and policy.

Now all of that — based in our ability to trust in the accuracy of information we use to select leaders and determine policy — is under threat from something that's invisible to us and most people don't even realize exists.

Possibly the greatest threat to humanity at this moment is something called an algorithm.

An algorithm is a software program/system that inserts itself between humans as we attempt to communicate with each other. It decides which communications are important and which are not, which communications will be shared and which will not.

As a result, in a nation where 48% of citizens get much or most of their news from social media, the algorithm driving social media sites ultimately decides which direction society will move as a result of the shared information it encourages or suppresses across society.

When you log onto social media and read your "feed," you're not seeing (in most cases) what was most recently posted by the people you "follow." While some of that's there, the algorithm also feeds you other posts it thinks you'll like based on your past behavior, so as to increase your "engagement," aka the amount of time you spend on the site and thus the number of advertisements you will view.
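As a rough illustration of the difference between a purely chronological feed and an engagement-ranked one, here is a toy Python sketch. The topics, click counts, and scoring weights are invented for the example and are not drawn from any real platform's algorithm.

```python
# Toy model of an engagement-ranked feed: each post is scored by the user's
# predicted interest (a stand-in for "past behavior") rather than by recency.
# All data and weights here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    minutes_ago: int

# What the user clicked on in the past, used as a stand-in for "past behavior".
past_clicks = {"election": 9, "climate": 1, "recipes": 3}

feed = [
    Post("friend_a", "recipes", minutes_ago=5),
    Post("friend_b", "climate", minutes_ago=20),
    Post("page_x", "election", minutes_ago=180),
]

def chronological(posts):
    # Newest first: what a simple "most recent posts" feed would show.
    return sorted(posts, key=lambda p: p.minutes_ago)

def engagement_ranked(posts):
    # Predicted engagement: affinity for the topic, slightly discounted by age.
    def score(p):
        return past_clicks.get(p.topic, 0) - 0.01 * p.minutes_ago
    return sorted(posts, key=score, reverse=True)

print([p.topic for p in chronological(feed)])      # ['recipes', 'climate', 'election']
print([p.topic for p in engagement_ranked(feed)])  # ['election', 'recipes', 'climate']
```

The three-hour-old post the user is most likely to engage with jumps to the top, which is the behavior the paragraph above describes.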

As a result, your attention is continually tweaked, led, and fine-tuned to reflect the goal of the algorithm's programmers. Click on a post about voting and the algorithm then leads you to election denial, from there to climate denial, from there to Qanon.

Next stop, radicalization or paralysis. But at least you stayed along for the ride and viewed a lot of ads in the process: that's the goal of the algorithm.

Algorithms used in social media are not tuned for what is best for society. They don't follow the rules that hundreds of thousands of years of human evolution have built into our cultures, religions, and political systems.

They don't ask themselves, "Is this true?" or "Will this information help or hurt this individual or humanity?"

Instead, the algorithms' sole purpose is to make more money for the billionaires who own the social media platform.

If telling you that, as Donald Trump recently said, climate change "may affect us in 300 years" makes for more engagement (and more profit for the social media site) than does telling the truth about fossil fuels, it will get pushed into more and more minds.

No matter that such lies literally threaten human society short-term and possibly the survival of the human race long-term.  

As Jaron Lanier told The Guardian:

"People survive by passing information between themselves. We're putting that fundamental quality of humanness through a process with an inherent incentive for corruption and degradation. The fundamental drama of this period is whether we can figure out how to survive properly with those elements or not."

Speaking of climate change and information/disinformation being spread by algorithms on social media, he added:

"I still think extinction is on the table as an outcome. Not necessarily, but it's a fundamental drama."

Climate change is a unique threat to humanity, one like we've never seen before. It's going to take massive work and investment to avoid disaster, and that's going to require a broad consensus across society about the gravity of the situation. The same could be said about threats to American democracy like the rise of far-right hate and election denial.

Yet social media is filled with content denying climate change and denigrating basic norms and institutions of democracy. This is a threat to America and to humanity itself.

The premise of several books, most famously Shoshana Zuboff's The Age of Surveillance Capitalism, is that the collection of massive amounts of data about each of us — then massaged and used by "automated" algorithms to increase our engagement — is actually a high-tech form of old fashioned but extremely effective thought control.

She argues that these companies are "intervening in our experience to shape our behavior in ways that favor surveillance capitalists' commercial outcomes. New automated protocols are designed to influence and modify human behavior at scale as the means of production is subordinated to a new and more complex means of behavior modification." (Emphasis hers.)

She notes that "only a few decades ago US society denounced mass behavior-modification techniques as unacceptable threats to individual autonomy and the democratic order." Today, however, "the same practices meet little resistance or even discussion as they are routinely and pervasively deployed" to meet the financial goals of those engaging in surveillance capitalism.

This is such a powerful system for modifying our perspectives and behaviors, she argues, that it intervenes in or interferes with our "elemental right to the future tense, which accounts for the individual's ability to imagine, intend, promise, and construct a future." (Emphasis hers.) 

So, what do we do about this?

When our Constitution was written, the Framers wanted "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."

Thus, Article 1, Section 8 of the Constitution gives Congress the power to pass laws protecting both physical and intellectual property, things like inventions as well as creative writing and art. We call these regulations patent, copyright, and trademark laws.

Social media companies have claimed that their algorithms are intellectual properties, inventions, and trade secrets, all things that fall under the rubric of these laws to advance and protect intellectual property and commerce.

And, indeed, the whole point of algorithms is to enhance commerce: to make more money for the social media sites that deploy them.

But are they promoting "the Progress of Science and the useful Arts"?  Is amplifying hate and misinformation "useful"?

If not, the power to keep algorithms secret that Congress has given, Congress can also take away.

In my book The Hidden History of Big Brother: How the Death of Privacy and the Rise of Surveillance Threaten Us and Our Democracy, I argue that algorithms should be open-source and thus publicly available for examination.

The reason so many algorithms are so toxic is that they are fine-tuned to maximize engagement for the benefit of advertisers, who then pay the social media company.

But if a pay-for-play membership fee were put in place to fund the social media site, as Elon Musk has flirted with, it could significantly diminish the pressure to have a toxic algorithm running things.

Nigel Peacock and I saw this at work for the nearly two decades that we ran over 20 forums on CompuServe back in the 1980s and '90s. Everybody there paid a membership fee to CompuServe, so we had no incentive to try to manipulate their experience beyond normal moderation. There was no algorithm driving the show.

It would also reduce the amount of screen time and the level of "screen addiction" so many people experience with social media, and free up personal time and resources, all while maintaining revenues for the social media site and reducing the incentives toward misinformation and radicalization.

But lacking a change in business model, the unique power social media holds to change behavior for good or ill — from Twitter spreading the Arab Spring, to Facebook provoking a mass slaughter in Myanmar, to both helping Russia elect Donald Trump in 2016 — cries out for regulation, transparency, or, preferably, both.

Ten months ago, U.S. Senator Ron Wyden, D-Ore., with Senator Cory Booker, D-N.J., and Representative Yvette Clarke, D-N.Y., introduced the Algorithmic Accountability Act of 2022 which would do just that.

"Too often, Big Tech's algorithms put profits before people, from negatively impacting young people's mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between," said Senator Tammy Baldwin, a co-sponsor of the legislation.




"It is long past time," she added, "for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers."

And, let's not forget, to protect our democracy, our nation, and our planet.

The people who own our social media, often focused more on revenue than the consequences of their algorithms, don't seem particularly concerned about these issues.

But we must be.