Perspectives on Technology

Michelle Calabro

On Cybernetics

This was originally posted as a thread on X/Twitter, written over the span of two hours early this morning. I’m re-posting it here so it’s easier for people to read.

+++++

I’d like to look less at the UI design here and more at the UX design. The problem is less about the visual design hierarchy, which directs a user’s attention, and more about the choice architectures available to users.

Which options are you giving me? Are you forcing me to do something I don’t want to do? How easy is it to reverse a decision?

All-or-Nothing/Never Back Down choice architectures do not offer meaningful choice to the user. This diminishes their sense of agency. It makes them feel forced.

Maybe you’re not giving them a choice, and in the moment they don’t care. But maybe that’s because they don’t know what’s really happening. Yet if they did know what was really happening, they’d never do the thing you want them to do.

In that situation they chose to do something but they didn’t know what they were choosing. This is a lack of informed consent.

Good parents are trusted to make decisions for their children because children don’t know enough about the world to be able to make informed decisions for themselves. They can’t be expected to understand the consequences of their decisions until they’re older.

It’s the parents’ obligation to make decisions that protect their children precisely because of this. Children (and some disabled people) are the only ones whose decisions should be managed by adults.

Obviously, there are plenty of bad parents and just plain horrible people who make bad decisions for their kids.

But no adult has the right to manage (or manipulate) another adult’s decisions, by arbitrarily or manipulatively limiting choices, intentionally withholding critical information, manipulating users’ expectations or using other nefarious tactics.

It’s unacceptable for someone to make decisions on someone else’s behalf because everyone should be able to direct the outcomes of their own lives.

If they can’t do that, then on what basis would we be holding them accountable? They’re not responsible since they didn’t actively choose to do it. That’s how some people get away with doing very bad things: by pleading insanity.

Rational choices are made when you understand the consequences of your actions, you weigh the pros and cons, and you decide what’s best for you.

Yet, we are mortal beings who will die someday and it’s impossible for an individual to make a decision that considers all of the possible outcomes.

Many outcomes are unknown or unknowable, and everyone has biases. There are many other kinds of biases— institutional bias, publication bias, etc. Your biases make you, you. Our collective biases make us, us.

The people I’ve seen who were most effective at making decisions had values, principles and personal philosophies that they used to help speed up decision-making. They’d spent many years crafting those values, principles and personal philosophies.

Yet they were also able to recognize when their way of doing things needed to change. They had a way of acknowledging it and deciding to do something about it.

If a system is always making decisions for you, then on what basis will you grow or change? On what premise? By whose values, principles and personal philosophies? Well, the designer of the system, of course.

The more complex the system, the more of the designer’s personal philosophy is incorporated into it.

That complexity comprises not only the hardware and software affordances, but also the organizational, cultural, political, social, psychological, medical and economic affordances of the companies and countries within which the system exists.

Some designers love and appreciate complex systems because they’re like philosophical treatises or entire worlds, with grammar, syntax, logic, characters, narratives, goals, challenges, currencies.

Some designers love designing complex systems. Yet when we look at the career trajectories of most designers, and at corporate organizational structures, almost no designer has the privilege of making decisions about an entire complex product.

Only authoritarian, overly-privileged designers are in positions to make the enormous number of decisions that get made in order to create complex technological systems.

This style of employee should not go unchecked. Why? We live in a democracy and this style of leadership is incompatible with our laws and way of life.

It’s also strikingly incompatible with people’s expectations of each other, our systems and… ‘the system.’

People expect products to be designed for them. They expect products to solve their real problems. Most of the time, those problems are emotional. And the way a person decides whether a product will solve their problem or not— that’s emotional too.

People who openly acknowledge the emotional dimension of product design, policy design, system design, and who openly acknowledge the emotional dimension of our own (and others’) decision-making, can see how cybernetic systems come to feel how they feel when we interact with them.

Governance systems are cybernetic systems. We want our governance systems to be designed intentionally, based on how we want to feel as citizens of this country. Not as a function of happenstance, or the power divide amongst political parties, or vacation schedules.

I like the smoothness of an elegant and well-designed system. I like low friction. It’s calming. But do I ever experience those systems in real life? No. They’re the ideal Platonic forms of systems that only exist in my mind. All systems are broken, in one way or another.

When you focus on this fact for too long, you start to get irritable. What do you do about that? Create the aesthetic experience of smoothness by controlling yourself, not your surroundings. You are the only one you can try to control.

The fact is, none of us are fully in control of ourselves. And when many of us experience aesthetic un-smoothness, we use negative coping mechanisms to deal with how irritable it makes us feel. We’re trying to get away from the bad feelings so we numb ourselves.

But when you’re numb you can’t solve anything because you can’t feel anything. You’re not calibrated to the feelings anymore— the feelings you intentionally distanced yourself from.

Paying attention to, describing, navigating and managing the feelings is how you solve the problems. It’s painful for all of us. It’s painful AND difficult for some of us.

No one is born with words that describe how they feel. Yet without words, we aren’t able to manage our emotions. Look for patterns amongst body sensations, the way you express yourself differently when you feel differently, the words people use in connection with those things.

Once you have labels for physical feelings, you can look for patterns. What kinds of situations make you feel good, not good? Once you see a pattern you can make a list of coping mechanisms to use in the future when that kind of situation comes up.

The smoother it feels, the lighter you feel, the less you have to worry about. The calmer you feel in your body.

It’s not only our ongoing relationships with broken technology systems that make us irritable. It’s also our interactions with other people. In places like this app, we’re exposed to people with many, many different views, because they’ve had very different life experiences.

My experiences are not more valid than theirs, nor are they less valid. If I want to do something to create the aesthetic sense of smoothness for myself in a situation, but it creates turbulence for someone else, is that ok?

Ethical emotional intelligence considers more than just how each of us feels emotionally, in the moment. It also considers the nature of our relationship and of this specific interaction, and it also considers the true motivations each of us has for interacting.

Did we choose to be here? Were we forced to be here? How can we be here peacefully?

Am I obligated to give equal consideration to everyone on this app [X/Twitter]? No. They can express themselves freely, but I’m not legally obligated to listen. I can mute or block them.

What if I block everyone I disagree with, or who interrupts the smoothness?

I used to think algorithmic internet tunnels of confirmation bias were bad. Maybe at one point they were. But today in the US, I don’t know if I think that anymore.

We’ve given up on making good faith arguments to authentically persuade each other.

And although the content of our arguments consists almost entirely of ego-plays and nonverbal emotional expressions, those go unacknowledged. Most of the time we keep fighting because we aren’t addressing the thing we’re really fighting about.

Plenty of communication and negotiation frameworks exist. That’s not the problem.

It’s that we don’t have an etiquette for how to disagree with strangers on the internet. It’s not just because the internet is relatively new, or because people behave differently online than offline. It’s because suddenly we’re connected to almost everyone else on Earth.

Who gets to decide the etiquette for Earth? If we don’t have etiquette we can never get to laws, because we won’t talk to each other long enough to write them.

Read More
Michelle Calabro

Harassment of Women on Twitter

On Twitter in the last several months, there’s been an increase in online harassment and cruelty to women who are perceived as threats to certain groups’ sense of superiority. The harassment can occur as harassing messages or as the algorithmic context and cadence within which seemingly benign messages are read.


Harassing Messages

Women in the child-bearing age range get harassed about arranged marriages (in cultures where this isn’t a common practice), forced pregnancies and pre-emptive maternity leave (when the woman isn’t pregnant or planning to get pregnant). Increased messaging about hormonal imbalances, menopause and post-menopause creates a sense of urgency, paranoia, desperation, shame, fear and a targeted woman’s willingness to comply with the demands of bad actors. The fact that none of these medical phenomena applies to her, specifically, is beside the point to the bad actors, since facts were never the basis of any of their arguments. Psychological manipulation was. These messages reduce a woman’s worth to her child-bearing and otherwise biological potential, removing her sense of freedom, dignity, autonomy, aspirations and desire. They’re meant to intimidate, silence and stifle the behavior of women, cut their careers short, and deflate their confidence.

Some women get messages online accusing them of being men. Even if a woman is transgender, this is hate speech. It’s a masculinizing reaction to a woman’s normal behavior, meant to discourage her from advocating for her own needs, using her voice, and exercising her right to express herself freely.

Although it’s likely that most online harassment of women originates from men, they aren’t the only ones harassing women online. There are even some female influencers online who appeal to other women’s insecurities and fears to sell products, creating an environment of shame, cruelty and lowered self esteem.

Take Action:

Learn about the mute and block functions on Twitter and mute the harassing words. Unfortunately, if you’re being targeted by someone, you will be re-triggered every time they find a new word or phrase to harass you with.


Algorithmic Context and Cadence of Medical Messages

Another form of online harassment women face on Twitter is not about the messages themselves, but about the context and cadence within which they appear. For example, seemingly benign health-related messages that originate from healthcare workers, therapists, pharmaceutical brands and healthcare organizations who are marketing their products and services can be perceived as harassment depending on their placement in relation to other posts.

As marketers are well aware, the messages people receive online (and offline) affect their thoughts, and thoughts affect actions. Content marketing can be labeled as such by the brand or not. If it’s not labeled and it appears in the feed alongside other posts that seem to be related, it can be misinterpreted. And if the same themes of messages appear over time and seem to increase in cadence, the user can experience it as harassment.

The increased occurrence of seemingly benign PSA-style content about cancer, Alzheimer’s disease and other mental health and medical issues can create paranoia and increased appetite for risk in targeted social media users. Additionally, increased messaging related to cancer and other terminal diseases has a profound effect on targets’ sense of their own future and assessment of risk. Those who believe their life is coming to an end may be more willing to take highly risky action that aligns with the agenda of the bad actors algorithmically targeting them with messages. A similar phenomenon has been documented in the context of radicalized groups in non-US countries.
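
One way to put the "increasing cadence" observation on firmer footing is to count, per week, how many saved posts touch a given theme. Below is a minimal sketch in Python; the example posts, dates and theme keywords are invented for illustration, and in practice the input would come from a personal archive or a platform data export.

```python
from collections import Counter
from datetime import date

# Invented example data: (date posted, post text). In practice this would
# come from a personal archive or data export.
posts = [
    (date(2023, 3, 1), "A seemingly benign PSA about memory loss"),
    (date(2023, 3, 8), "Another post mentioning memory and forgetting"),
    (date(2023, 3, 9), "Unrelated post about the weather"),
    (date(2023, 3, 15), "Third post about forgetting things"),
]

# Theme keywords are assumptions for illustration only.
THEME_KEYWORDS = {"memory", "forgetting", "amnesia"}

def weekly_theme_counts(posts):
    """Count, per ISO (year, week), how many posts mention any theme keyword."""
    counts = Counter()
    for posted_on, text in posts:
        words = set(text.lower().split())
        if words & THEME_KEYWORDS:
            year, week, _ = posted_on.isocalendar()
            counts[(year, week)] += 1
    return dict(counts)

print(weekly_theme_counts(posts))
# A count that rises from week to week suggests the theme's cadence is increasing.
```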

Messaging themed around amnesia, forgetting, forgetfulness and Alzheimer’s disease can also appear with conspicuous timing and increasing frequency, as a way of encouraging targets to forget the harassment that has been done to them online.

Messages that interfere with a woman’s connection to, and self-control of, her own body create isolation, trauma, shame, paranoia and confusion. They can lead her down a costly and time-consuming path of wrong diagnoses, which can lessen her ability to perform basic tasks at work and at home.

Take Action:

If a woman has detected that she’s being targeted by medical harassment, she can take action to collect facts and evidence to assess whether there is or is not a reason to worry about her health. Unfortunately it will take time, and the financial and psychological costs of this medical due diligence are quite significant, yet it will help to minimize harms created by medical misinformation and disinformation.

Find healthcare professionals whom you can trust on the grounds of their track record of consistently:

  1. conducting appropriate tests

  2. interpreting those tests accurately

  3. communicating the results of the tests to you in a manner that you understand

  4. providing an effective plan of action to take care of your health

  5. keeping all of your data private under the most thorough security measures

  6. not accepting bribes or outside influence by individuals or groups other than yourself.

If a healthcare provider fails to do any one of these things effectively, you should find another one. Once you’ve found a healthcare provider you can trust and they’ve done appropriate tests, there is a lower probability that you can be swayed by medical harassment online.


Violent Words

Violent words can be used to harass women and men. They change the context within which other messages are interpreted, and create unwelcome negative distractions which lead to an overall negative experience of the product. The mute function can be used to eliminate violent words from the feed, but if a user is being targeted, each time a new violent word occurs, they’ll be re-triggered. I’ve compiled a list of violent words users can add to the mute list if they choose to reduce the occurrence of violent messages in their feed, and would prefer not to be re-triggered every time a new one occurs.

Take Action:

Copy and paste each of these words into the list one by one, using the following path:

Click on your Twitter profile photo
—> settings and support
—> settings and privacy
—> privacy and safety
—> mute and block
—> muted words
—> add

A forgiver

Abusing

Alzheimer’s

Amnesia

Anti-science

Apocalyptic

Armed

Armed forces

Arms race

Arrest history

Arson

Assassin

Assassination

Assault

Assaulted

At risk

Axe

B***h

Bankruptcy

Battle

Be patient

Bed wetters

Bed wetting

Biggest threats

Bitch

Boil the frog

Breaking point

Brothel

Brutalized

Bullet

Bullies

Buzz

Cake

Cancer

Captured

Cheat

Child porn

Child pornography

Cold world

Collapse

Come out

Coming out

Convict

Convicts

Coup

Crashed

Crashed into

Crime

Criminals

Crusade

Cry

Culling the herd

Cyberattacks

Dead

Deadliest

Deadly

Death

Deaths

Deepfake

Defeated

Defeating

Demon

Demons

Derail

Derails

Didn’t happen

Disaster

Doesn’t kill you

Don’t remember

Drown

Drowned

Drowning

Drugs

Elephant in the room

Emergency

Evil

Extinction

Extremist

Failed

Fallout

Fat

Fertility

Fire

Fired up

Flee

Force

Forcedtoflee

Forgive

Freed

Freedom

Freedom fighter

Give up

Government trust

Grown-ups

Heart attack

Homebuyers

Hormonal

Hormones

Horror

Hostage

Human remains

Hunt them down

Hypersonic

Idiot

Injured

IQ

Killed

Killer

Killing

Knife

Knife attack

Lethal

Lost her life

Lost his life

Lost their lives

Make up your own mind

Manipulate you

Maternity leave

Medical

Mental

Missile

More dangerous to

More than you realize

Most expensive

Mouth breather

Murder

Murdered

Murderer

N*gger

Never be able to escape

Nigger

No hard feelings

Not everyone

Nuclear

Nuke

Nukes

Occupied

Only option

Outofcontext

Outrage

Outraged

Overweight

Pasta

Pathological

Patriot front

Penitentiary

Pimp

PMS

Prison

Prisons

Psy op

Psyop

Quicksand

Rape

Raping

Rebellion

Redneck

Resist

Retire

Retirement

Roasted

Salary

Savage

Savagely

Savages

Scandal

Scapegoat

Security threat

Seized control

Self care

Self-care

Shadow

Shadows

Shoot

Shot

Shut up

Simple life

Sink

Sinking

Smelliest

Soul sucking

Soul-sucking

Sphere of influence

Stab

Stabbed

Stabbing

Starvation

Starve

Starved

Steal

Stolen

Stop believing

Struggles to find

Stupid

Suicidal

Suicide

Survived by

Survivor

Terrorism

Terrorist

The resistance

Theft

Thief

Thieves

Threat

Threaten

Threatening

Thug

Thugs

Torched

Toxic

Tragedy

Tragic

Tragically

Tribal

Tribalism

Trust nobody

Trust the government

Trust your government

Vilify

Violence

Violent

Weapons

Weep

Went missing

Wildfire

Witch

You need

Your forgiveness

Your own country
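
If you keep a personal copy of a mute list like this in a plain text file, a short script can tidy it up before you paste the entries in one at a time through the path above. This is a minimal sketch in Python; the file name muted_words.txt is an assumption, and Twitter itself still requires each entry to be added manually.

```python
# Minimal sketch: clean up a personal mute-word list before manually adding
# each entry via Settings and privacy -> Privacy and safety -> Mute and block
# -> Muted words -> Add. The file name "muted_words.txt" is an assumption:
# one word or phrase per line.

def load_mute_words(path: str) -> list[str]:
    """Read mute words, drop blanks and duplicates, and sort them."""
    with open(path, encoding="utf-8") as f:
        entries = [line.strip() for line in f if line.strip()]
    seen = set()
    unique = []
    for entry in entries:
        key = entry.lower()  # case-insensitive deduplication
        if key not in seen:
            seen.add(key)
            unique.append(entry)
    return sorted(unique, key=str.lower)

if __name__ == "__main__":
    for entry in load_mute_words("muted_words.txt"):
        print(entry)
```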


Why is the harassment of women on Twitter bad for business, you ask? Because women make up half the world’s population, and if they’re getting harassed on Twitter, they’ll stop using it.

Single women within a certain age range have money to spend, and the lure of Twitter as a source of information and inspiration can only be capitalized on if their dignity, autonomy, aspirations and desires are acknowledged and supported. They want to feel good about how they spend their time online, and about what they spend their money on. Brands can’t build a trusting relationship with women and play a meaningful role in their lives if they meet them in online spaces that diminish their confidence and sense of safety.

Twitter needs a competent Trust and Safety team to protect women from these harms and others, and provide tools to detect, document and attribute the sources of online abuse.

Read More
Michelle Calabro

On ChatGPT Impersonation

Unfortunately, ChatGPT can be used to convincingly impersonate individuals, and if it is deployed in settings like DMs and text messages, personal information that was shared in those messages with a reasonable expectation of privacy can be exposed. This puts the identities of innocent people at risk. Even when you think you’re not sharing sensitive information about yourself, seemingly innocuous data about you can be weaponized against you if it falls into the wrong hands.

Read More
Michelle Calabro

Twitter— Privacy, Free Speech and Profitability

Twitter should rewrite its policies on users’ privacy and content moderation while simultaneously creating an environment that is attractive to advertisers and investors. It’s not easy to balance these two sets of things because they’re at odds with each other. But this is the balance I think it needs in order to survive. Free speech is also an important component, and it can be at odds with some content moderation policies. But there’s a balance to strike here as well.

Read More
Michelle Calabro

Toward a Cyberbullying Safety Tool

Kids need tools to detect and document cyberbullying within and across social media platforms. How else will parents defend their kids if they don’t know they’re being targeted and if they don’t have court-admissible evidence? Given the complexity and subtleties of QAnon logic, conspiracy theories, and other cyberbullying techniques, documentation must account for the signifiers, signifieds, and context within which they gain their meaning. These ever-evolving semiotic gestalts must be distinguishable from “normal” posts, and ideally decipherable. This is how parents and authorities would be able to figure out how the kids were impacted. And hopefully it could help parents correct the false beliefs and repair the self-esteem damage done to their children. Ideally such a tool would also determine the origin/attribution/author of the attacks, since even a single nefarious user can appear to create posts using different real and fake accounts.
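
As one illustration of the kind of record such a tool might keep, here is a minimal sketch in Python. The field names (platform, post_text, surrounding_posts, suspected_linked_accounts, and so on) are my own assumptions for the example, not an existing schema or product.

```python
# Minimal sketch of a court-oriented cyberbullying incident record.
# All field names are illustrative assumptions, not an existing schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentRecord:
    platform: str                  # app or site where the post appeared
    author_handle: str             # account that posted the content
    captured_at: datetime          # when the evidence was collected
    post_text: str                 # the post itself (the signifier)
    interpreted_meaning: str       # what the post signified in context
    surrounding_posts: list[str] = field(default_factory=list)         # context that gives it meaning
    suspected_linked_accounts: list[str] = field(default_factory=list) # possible same-author accounts
    screenshot_paths: list[str] = field(default_factory=list)          # supporting evidence files

# Example: documenting a single post together with the context that shaped its meaning.
record = IncidentRecord(
    platform="example-social-network",
    author_handle="@anonymous_account",
    captured_at=datetime(2023, 1, 15, 9, 30),
    post_text="Seemingly benign message",
    interpreted_meaning="Read as a threat given the two posts that preceded it",
    surrounding_posts=["Earlier post A", "Earlier post B"],
)
print(record)
```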

Read More
Michelle Calabro

Thoughts on the Future of US AI Policy

The United States must realign our techno-governance system with values that allow democracy to flourish so the many diverse groups and ideologies that comprise our country can peacefully coexist; and so we can build more effective cooperation across global democracies while exploring our common ground with other government types. To make this happen, the US should create an ecosystem consisting of: 

    • stakeholder capitalism, with accompanying company valuation mechanics that incentivize it to gain popularity (this is a market-driven option, as opposed to CEO- or government-driven options)

    • an AI Regulatory team on the federal level

    • ethical engineering standards and federal laws that are relevant to the current developments in technology

    • participatory governance platforms and mechanisms (similar to those used in Taiwan). Participatory government speeds up the government’s ability to respond to the needs of the people, and increases government trust and effectiveness.

    • shared AI resources (such as those proposed for the National AI Research Resource) that are controlled and monitored

    • government-funded independent auditors (government-funded auditing reduces conflicts of interest)

    • certified ethical engineering professionals (standardizing ethics in the profession keeps engineers informed of changes in ethical practice, and creates an external standard to which the engineer can be held accountable)

    • mandatory ethics training in engineering educational programs

Taken as a whole, this ecosystem would speed up the government’s ability to create relevant tech policies, and reinforce accountability across corporations and the government in their service to the American people, our economy, our peaceful relations with other nations, and the planet. Through holistic system design, when multiple kinds of entities share the responsibility of AI governance, the effectiveness of each individual entity increases, and the societal impact of each entity’s negative biases is diminished. By sharing our responsibility in creating a better future, we create career opportunities for more diverse people to contribute and incorporate the many views that will make that inclusive future possible.

My perspective is informed by systems thinking: looking at the system of stakeholders, rules and interactions holistically, designing incentive systems and accountability mechanisms that are humanist (versus transhumanist), inclusive, human rights- and democracy-reinforcing. The techno-governance systems that currently run Americans’ lives are shockingly lacking in their inclusion of women’s and minorities’ perspectives. It’s crucial that more diverse people imagine our positive and humanity-reinforcing future with technology— independent of the profit motive of a single organization or the religious or political ideology of a single group— then use that foresight to inform US policies that benefit the American people, our economy, our peaceful relations with other nations, and the planet.

Artificial general intelligence may soon impact every arena of human life including cultural, social, anthropological, psychological, medical, economic, political, military, spiritual, religious, and every other arena imaginable, including those that haven’t yet been imagined. The arrival of AGI is not guaranteed, and it is important to consider whether artificial general intelligence should exist. Careful thought should be dedicated to intentionally designing policies that encourage a future that all humans want to live in, and which brings out the best in human ability and the human experience. It’s difficult to create policies around a future that is almost unimaginable; it is much more practical to create a method for keeping the public informed of concrete developments toward AGI, and a system of checks and balances that evolves as the technology evolves. Toward this end, I propose that all companies whose technologies and datasets position them to contribute significantly to the development of AGI should be required to host regular public conversations (not broadcasts) to update the public and to understand people’s hopes for AGI systems, their questions and their concerns. Although it isn’t possible for a company to anticipate the needs of the entire American public (or all of humanity), it can do its due diligence through open, non-technical conversations that reflect its current developments toward AGI. The US federal AI regulatory team must hold AGI developers accountable to the public’s hopes and concerns.

Read More
Michelle Calabro

We need robust federal protections against cyberbullying.

Cyberbullying is a psychologically torturous crime that can drive victims to suicide. When a victim doesn’t know who is harassing them or why they are being harassed, they internalize the messages, which leads to the destruction of their self esteem. The sense of being surrounded by attack causes the victim to retreat from online forums even though they may rely on those forums for school, work, and staying connected to family and friends. When this happens, the victim can fail in school, lose their job, avoid seeking new employment, and lose valuable supportive relationships. Some victims go through these experiences without ever telling another person, because of shame or fear of retaliation.

According to stopbullying.gov, “no federal law directly addresses [cyber] bullying.” Groups currently protecting Americans against cyberbullying are focusing their efforts on three populations: young people; journalists and activists; and elected officials (especially women), whose harassers attempt to interfere with the democratic process by bullying them.

Victims rely on state harassment laws that allude to cyberbullying but don’t explicitly address it. Yet the internet Americans use is not delineated by state boundaries, and harassment can cross state or national lines to harm American citizens.

Another path to protection is the policies of tech companies, but their Terms of Service are ever-changing. We cannot allow the dignity and mental health of Americans to be left to the personal preferences of tech CEOs and the soft laws that manifest them.

We need robust federal protections against cyberbullying that protect all Americans, and that are not reliant on the limited coverage afforded by individual state laws or the changing whims of tech CEOs.

Read More
Michelle Calabro

On Nudging Systems, During the COVID19 Pandemic

When we’re changing people’s behavior for ‘the good,’ who gets to decide what ‘the good’ is? The developers of systems that are capable of providing nudges are companies and governments. The design decisions around nudges will always optimize for the continued profit of those companies and the power of those governments. It’s the people in control of the technology system who make the ultimate decisions about what choices and possibilities are made available to users of that system. And those in control do not usually prioritize an individual’s interests over their own bottom line or continued power. Yet an antagonistic cybernetic relationship between the organization in control, the technology system, and the user will lead to distrust and, ultimately, conflict.

Nudges can be used as a tool for persuading collective action, where collective action is beneficial for the wellbeing of a society at large. American culture is highly individualistic, yet individualism has its faults and individual freedoms should not be prioritized over collectivism in all situations. Therefore, nudges that encourage an individual user to make decisions that optimize for the wellbeing of society over their own short-term desires, can be ethically delivered with the proper transparency and democracy-reinforcing controls. From my perspective, Americans’ experience of the coronavirus pandemic has shown us the devastating consequences of extreme individualism and distrust of government. The government’s A/IS nudging systems of the future must take concrete action to cultivate trust.



We are able to comment on what kinds of influence are ethical versus not ethical, based on internationally agreed-upon frameworks such as the Universal Declaration of Human Rights. However, nudges may create harms that were not anticipated by such international frameworks; therefore, developers of A/IS Nudges must create processes and systems for listening to users’ criticisms, negative experiences, and feedback, and for cultivating an ongoing process of continuously engaged learning about how best to serve people.



We should delineate ethical methods of influence, based on transparency and UX design decisions that reinforce users’ control and agency, to promote users’ trust in systems that nudge. If our aim is not to protect the human rights of users, then what is our differentiating contribution to this topic? Creators of A/IS systems are always going to consider their organization’s stakeholders, and they will always validate and rationalize their decisions for making nudges. What is the unique value that we will provide to our readers?



Freedom of Expression is a human right that is directly related to A/IS nudging. It would be convenient for developers to assume that users are fully expressive of their opinions, thoughts and beliefs while interacting with A/IS systems. Nudges are delivered based on available user behavioral data and the current affordances of existing technology systems. However, the assumption of openness and sharing is not founded in reality. As trust in technology companies and the government continues to decline, users will share less and less of their opinions online, and in systems that they suspect to have back doors, or that share their data with third parties that negatively affect their lives in material ways. With diminished trust in systems will come increasingly defensive, combative and subversive user behavior. 



It’s irresponsible to deliver highly consequential nudges based on incomplete data. What makes a nudge highly consequential? If it impacts a user’s health or safety, or the health and safety of their dependents. Creators of these types of nudges must earn users’ trust in order to encourage safe Freedom of Expression, and more accurate data on which to base nudges. 



It’s irresponsible to deliver nudges based on emotion recognition inferences, which have been shown to be scientifically questionable. We’ve seen recent relevant reports from Kate Crawford and Article 19, and several Congresspeople have introduced regulatory legislation in Congress within the last year.

Read More
Michelle Calabro

On Affective Computing

You interact with artificial intelligence systems every day, but you may not always know it. Some of these systems actively seek to understand your beliefs and emotions. When you are most receptive, they persuade you on what to believe or how to behave. These practices raise concerns about human agency and our democracy.



Affective Computing, also called ‘emotion recognition,’ is the development of Artificial Intelligence systems on your smartphone, computer and wearable devices that aim to recognize, interpret, process, and simulate human affects (although with questionable validity). They analyze your words and behavioral biometric data (e.g. facial expressions, eye movements, voice quality, sweat, brainwaves).
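
To make the text side of this concrete, here is a minimal sketch of the kind of lexicon-based scoring an affect-inference system might start from. The word list and weights are invented for illustration; real systems combine many more signals (voice, face, biometrics), and, as noted above, the validity of the resulting inferences is questionable.

```python
# Minimal sketch of lexicon-based affect scoring over text.
# The lexicon and weights below are invented for illustration only;
# production emotion-recognition systems fuse many more signals,
# and their scientific validity remains contested.
AFFECT_LEXICON = {
    "happy": ("joy", 1.0),
    "excited": ("joy", 0.8),
    "angry": ("anger", 1.0),
    "furious": ("anger", 1.2),
    "worried": ("fear", 0.9),
    "afraid": ("fear", 1.0),
}

def score_affect(text: str) -> dict[str, float]:
    """Sum lexicon weights per affect category for the words in a text."""
    scores: dict[str, float] = {}
    for token in text.lower().split():
        word = token.strip(".,!?")
        if word in AFFECT_LEXICON:
            category, weight = AFFECT_LEXICON[word]
            scores[category] = scores.get(category, 0.0) + weight
    return scores

print(score_affect("I am worried and a little angry about this."))
# -> {'fear': 0.9, 'anger': 1.0}
```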



Affective Computing could be harmful, especially in high-stakes situations in which an AI system makes decisions that affect a person’s physical or mental health or financial situation. During the pandemic, Americans are even more vulnerable to manipulation because they are spending a lot more time working and playing games on the internet, and mental health issues are increasing.



Sentiment analysis; augmented, virtual and extended reality systems; smart televisions; brain-computer interfaces and other sensors may be used to make inferences about video game players’ emotional state, yet the validity of these inferences is scientifically questionable. Although emotion recognition technologies are available for use in entertainment and gaming environments, it is unclear to what extent this practice is being done today, how it’s expected to evolve in the next several years, and the human rights implications. How does emotion recognition in entertainment and gaming present unique challenges to players’ right to privacy, freedom of expression and other human rights?



Senator Gillibrand’s well-strategized S.3300 - Data Protection Act of 2020 proposes to create a Federal Agency to protect individuals’ privacy rights. A few amendments could help it gain bipartisan support and address the concerns outlined above. It restricts “any processing of biometric data for the purpose of uniquely identifying an individual.” I recommend reframing it as an acceptable practice if done by a Federal, State or local government agency for national security purposes. S.3300 should restrict the collection and use of behavioral biometrics for Affective Computing applied to marketing and entertainment purposes. S.3300 restricts “the use of personal data of children or other vulnerable individuals.” This could be implemented with an on-device ID that restricts downloads by protected groups. Senators Merkley and Sanders’ S.4400 does an excellent job of outlining different types of biometric and genetic data. Senator Booker’s S.2689 demonstrates how behavioral biometrics can be used to re-entrench pre-existing racial biases. I recommend that Senators Gillibrand, Merkley, Sanders and Booker work together to further develop S.3300.

Read More
Michelle Calabro

The Future of Human Computer Interaction

In January through March of 2020, I was contracted by RLab to produce a report on The Future of Human Computer Interaction which “focused on trends affecting hardware and software interfaces, to spread awareness of RLab’s expertise in the areas of XR and spatial computing, as well as further our connections within New York City’s ecosystem.”


INTRODUCTION

Each of my conversations with the interviewees had a beautiful undercurrent of innovation, and I’ve enjoyed thinking about the future of human computer interaction. I’ve organized my initial thoughts focused on

trends affecting hardware and software interfaces, to spread awareness of RLab’s expertise in the areas of XR and spatial computing, as well as further our connections within New York City’s ecosystem

since I want to make sure I’m calling attention to things I learned that could help RLab support education, entrepreneurship and innovation around XR/Spatial Computing technologies.

INNOVATION

As RLab continues to support XR/Spatial Computing innovation in New York City, what are some themes that we might want to tell others to pay attention to?

Truth and disinformation are main concerns for those practicing data analytics and storytelling. John Peters questions, “How do you actually pay for news because everybody expects something online to be free? I think it's our civic duty to be taxed. Just the way our taxes pay for schools, our taxes should pay for the news or at least public information.” Media companies experimenting with XR, such as USA Today, must continue to ask themselves if their business goals erode journalistic integrity. How can a newsroom create useful, impactful, personalized and fast news that makes someone love their brand, while still maintaining truth and journalistic integrity? I fear that the pursuit of the combination of utility, impact, personalization and speed would lead to a deeply unethical journalistic practice. I personally believe the USA Today team’s approach is more appropriate for a long-form documentary or gamelike experience, not news.

Trust continues to be a theme when speaking about our interaction with systems, whether we’re observing the phenomenon of trust between the user and the technology system, or between the user and the business or government institution that created it. Several interviewees have expressed a strong distrust in systems, drawing from the long-standing tradition of cybernetic theory. Others have spoken about trust from a research perspective. Jeffrey Heer’s research into human agency, trust and certainty provokes questions about how our over-trust in systems leads to humans becoming less introspective, and thinking less critically.

Technologies create imbalances of power. As John Peters notes, “most technologies have just ended up differentially empowering different classes over others.” Unsurprisingly, when interviewees have spoken about inclusion, it has seemed to inspire them to think more optimistically about the future. Our interviewees want a future powered by technologies that include everyone of all races, genders, socioeconomic backgrounds, physical abilities and disabilities, etc. Luke Dubois is most interested in using VR for therapy, to create experiences that help people with various abilities and disabilities by sensorily transporting them so they feel centered, relaxed, balanced and supported, even if they’re in high-stress situations.

Data Governance and Data Rights were themes we heard many interviewees speak about. In cyber physical space, who will have access to which information about which people? How can we delineate boundaries of data archives that create meaningful engagement in local, state, national and global communities? What are our data rights and how can they be protected? Steve Feiner spoke about data rights in public space; he’s concerned about the possibility that tiny seemingly insignificant pieces of data (that have time/location metadata) from multiple devices could get pieced together to spy on someone by corporations or governments. As Ken Perlin put it, “What are the rights and privacy issues around technology when I could know everything about you?”

The pursuit of Cyber Physical Space (a term we’ve heard from interviewees in the Asia Pacific Region and previously known as the AR Cloud) was mentioned by many interviewees. Books, tv shows and films have helped to create this shared image of the future in cyber physical space, and interviewees have different perspectives on how best to move toward this future. A very interesting point we heard from the Hakuhodo team is how Japanese religious beliefs impact their relationship with machines. They imagine many gods in the world around them, and they view XR as a way to inhabit that imaginary world. Therefore, their company’s mission to create cyber physical space is one they pursue without reserve. They know that other cultures have hang-ups that slow innovation toward this goal. Although Genevieve Bell is also located in APAC, she is busy teaching students to ask ethical questions that lead to a more equitable future in cyber physical space.

The environmental impacts of innovation are cause for concern. Mark Parsons thinks efficiency, carbon neutrality, less waste, lower building costs and quicker returns on investment are the big changes that we're going to see in the next 10 years. Amy LaMeyer is most excited about the ways that XR technologies could help us create less waste/garbage, and about how collaboration and telephony tools could help us lessen carbon emissions from international travel.

EDUCATION

When we’ve talked to folks about how best to approach teaching XR/Spatial Computing, we’ve heard many people say the main challenge is mediating, supporting and creating the inevitable tension that arises from bringing together a diverse group of students. Negotiating the tension is critical to supporting inclusive design environments that lead to inclusively-designed systems. I believe we heard the most nuanced, actionable directives from Genevieve Bell, Jeffrey Heer and Stephanie Dinkins.

Another consideration for education is how the lessons get translated into jobs and the job market. Genevieve Bell mentions that graduates of 3Ai have gone on to work for governments and companies. My opinion is that one can only hope that their organizations’ structures and their roles in those structures support the ability to ask critical ethics questions and make real change. Stephanie Dinkins creates compelling, provocative work that invites people to ask valuable questions about power, culture and inclusion, yet there is not an established pathway for other artists to do the same. She is currently focused on helping to create that pathway for other artists.

I’ll not name interviewees here, but we heard a few professors speak about their distrust for companies and the government. I personally am very sympathetic to many of the opinions they expressed. However, I’m left wondering, “Is it responsible to teach students to distrust the work that they’ll inevitably end up doing for companies and the government, without giving them a way to make positive change within those institutions or others?” In my ideal world, RLab would connect these people with each other to facilitate conversation around regulation and policy. I think it would be very interesting and important to speak to people who are currently working in AI Ethics/Data Regulation. RLab could also support artists in creating speculative and critical work, like what Stephanie Dinkins creates, to continue to engage the wider public in these conversations.

CONCLUSION

There is a need to balance the speed of innovation with the asking of thoughtful questions about how our inventions might impact individuals, communities and the earth. Just because we can invent something does not mean that we should. How can we educate companies and governments on how to ask these questions, and when to discontinue innovation projects that would create more harm than good? How can RLab continue to make money even if we recommend that an innovation project be discontinued? Can innovators create business models that don’t rely on shipping products fast? We need regulations to protect our democracy, our individual rights, and our country.

Read More
Michelle Calabro

We need to build Trust and Accountability into the use of AI.

This piece, co-authored by Michelle Calabro and Ryan Carrier of ForHumanity, is adapted from our response to the NYC Economic Development Corporation’s Request for Expressions of Interest in operating a Center for Responsible AI. The submission was a joint effort between ForHumanity, The Future Society and Michelle Calabro.



The proposal has neither been accepted nor rejected at the time of this publication. None of the authors or organizations involved have any affiliation or connection to the SEC.


Whether people know it or not, Artificial Intelligence powers much of the systems that run our public and private lives, in physical space and in virtual space. It’s integrated into home electronics like dishwashers, refrigerators, thermostats, and lighting systems. It sits on our countertops, listens to our voice commands, plays music and tells us jokes. It learns about the products we like and suggests more things to buy. It predicts the questions we want to ask Google, before we even finish typing. It helps movie production companies place smart financial bets on the best stories to tell. It helps us get smarter about how to run our cities. It drives large shipping trucks (without sleepy drivers inside) across American highways, and even optimizes the shipping company’s logistics too. In our factories, Artificial Intelligence-powered robots make the products. In our farmlands, it optimizes production. In New York City, Artificial Intelligence has taken over one of our most iconic scenes. Look at videos of the New York Stock Exchange (NYSE) during the 70’s, 80’s or 90’s. The trading floor was vibrant, flourishing, and exciting back then. Today? The floor is devoid of traders and is only used as a backdrop for business news programs and NYSE photo ops.



For most of human history, public spaces like crossroads and town squares used to be where people met to exchange information and ideas. Yet since the early 1990’s, the internet has been the place where individuals and groups across the world could connect; physical location has played less of a role in shaping cultural movements. In the early days of the internet, we thought of it as a relatively neutral ‘virtual place’ (free from commercial interests and surveillance) where we could connect to like-minded people, no matter where in the world they were located. In the last decade with the availability of big data, stronger computing power and refreshed optimism about AI research, the internet has become a different kind of place. It is infused with the interests of the entities that create and influence it — their culture, ethics, values, ideologies, local laws, profit motives, regulations, etc.; and new practices that can twist culture such as micro-targeting, voter suppression, and adaptive online content. As New Yorkers, we pride ourselves on the diversity of our population, and the tolerance for ‘otherness’ that is required to peacefully coexist. But that tolerance is being put to the test — Artificial Intelligence has been used to lower people’s tolerance for perspectives unlike their own. Harvard researchers have found that in 2016, “major spikes in outright fabrication and misleading information proliferated online, with people using warlike rhetoric in social media posts.”



Over recent years, we have seen numerous examples of negative outcomes from AI systems created by corporations: “Tay”, the racist chatbot from Microsoft that they shut down less than 24 hours after inception; Amazon’s hiring algorithm that was based on their own hiring data which was quickly shut down because it was biased; Google’s external Ethics Board which was forced to shutter only one week after launching in Spring 2019; and Facebook being fined $5 billion for the misuse of user data connected to privacy laws in July 2019. Regardless of the intent of corporations to be socially responsible, they exist to benefit shareholders and their primary goal is to achieve profits. The causes of these negative outcomes have to do with the technologies themselves, the data that was used, the interests that drove system design, and many more contributing factors.



What does it mean to create Responsible AI, and how can we hold companies accountable? Existing laws are proving insufficient: the United States does not currently have regulations on data security and the responsible use of AI; Congress is under-educated about the matter; universities train students to ask these questions, but no one has definitive answers; and consumers are forced to comply with Privacy Policies and Terms of Service Agreements that err on the side of protecting corporations, not people. In order to mitigate further risk, the world has quickly called for guidelines, frameworks and best practices to be drafted, adopted and implemented.



New Yorkers have a unique history of creating successful systems of accountability. The largest industry in New York is the Financial Services industry, of which the four biggest audit/accounting/assurance firms are still headquartered here in our city. In the early 1970’s, the accounting industry came together to form the Financial Accounting Standards Board (FASB). Meanwhile in London, the industry there formed the precursor organization to today’s International Financial Reporting Standards (IFRS). Once FASB and IFRS established uniform procedures, processes and frameworks, the lawmakers knew they had a system that could be relied upon to deliver oversight, governance and trust. Both in the United States and abroad, these standards were adopted into law less than 24 months after their creation by the Securities and Exchange Commission (SEC) and similar foreign government agencies. Today, accounting is embedded deeply into our capital markets procedures, which has made the responsibleness and trustworthiness of the numbers a foregone conclusion to nearly all who rely upon them.



If independent auditors create a uniform (yet constantly evolving) framework for auditing AI systems, what are some potential outcomes? We’d escape the problem of technology companies not being able to regulate themselves. AI innovators would begin to consider the audit rules around bias, privacy, ethics, trust and cybersecurity, while designing AI systems in the future, and this would lead to more responsibly designed systems. Over time, it would lead to governance and oversight. People would have a way to decide whether an AI system is trustworthy. Please read here to learn more about the Independent Audit of AI Systems framework.

Read More