Digital Citizen

From Green Policy

Digital Rights Tag Cloud

Each of us can make a positive difference

Becoming Planet Citizens
Each of us can make a positive difference by stepping up & doing our best -- SJS/GreenPolicy360
Video: Steve Jobs, speaking of Apple -- "People with passion can change the world for the better"


Visit ISTE, the International Society for Technology in Education

The conversation around digital citizenship is expanding in EdTech

Educational technology is today's challenge


To empower educators and parents to create a healthy digital culture at school and home.

To refocus conversations around digital citizenship towards practical approaches that help educators and parents support young people in a highly digital world.


Artificial Intelligence Glossary: Neural Networks and Other Terms Explained

The concepts and jargon you need to understand ChatGPT

A list of phrases and concepts useful for understanding artificial intelligence, in particular the new breed of A.I.-enabled chatbots like ChatGPT, Bing and Bard.

Bing and Bard chatbots are being rolled out slowly, and you may need to get on their waiting lists for access. ChatGPT currently has no waiting list, but it requires setting up a free account.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an A.I. chatbot. For example, you may assume it is kind or cruel based on its answers, even though it is not capable of having emotions, or you may believe the A.I. is sentient because it is very good at mimicking human language.

Bias: A type of error that can occur in a large language model if its output is skewed by the model’s training data. For example, a model may associate specific traits or professions with a certain race or gender, leading to inaccurate predictions and offensive responses.

ChatGPT: ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing: Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Bard: Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.

Ernie: The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.

Emergent behavior: Unexpected or unintended abilities in a large language model, enabled by the model’s learning patterns and rules from its training data. For example, models that are trained on programming and coding sites can write new code. Other examples include creative abilities like composing poetry, music and fictional stories.

Generative A.I.: Technology that creates content — including text, images, video and computer code — by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.

Hallucination: A well-known phenomenon in large language models, in which the system provides an answer that is factually incorrect, irrelevant or nonsensical, because of limitations in its training data and architecture.

Large language model: A type of neural network that learns skills — including generating prose, conducting conversations and writing computer code — by analyzing vast amounts of text from across the internet. The basic function is to predict the next word in a sequence, but these models have surprised experts by learning new abilities.
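The "predict the next word" mechanic can be sketched in miniature. The toy bigram model below is purely illustrative (its tiny corpus is invented, and real large language models use neural networks trained on vastly more text); it generates a sentence by repeatedly choosing the word most often seen after the previous one:

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- invented, purely for illustration.
corpus = "the cat sat on the mat and the cat slept and the cat sat".split()

# Count bigrams: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

# Generate by repeatedly predicting the next word -- the core loop
# a large language model performs at enormously greater scale.
words = ["the"]
for _ in range(4):
    words.append(predict_next(words[-1]))
print(" ".join(words))
```

The surprising abilities described above emerge when this same next-word objective is scaled up by many orders of magnitude.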

Natural language processing: Techniques used by large language models to understand and generate human language, including text classification and sentiment analysis. These methods often use a combination of machine learning algorithms, statistical models and linguistic rules.
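As a toy illustration of one such technique, text classification, the sketch below scores sentiment by counting words against hand-made lexicons. The word lists are invented for this example; real NLP systems learn these signals statistically from large labeled datasets:

```python
# Invented mini-lexicons -- real systems learn such signals from data.
POSITIVE = {"good", "great", "helpful", "clear"}
NEGATIVE = {"bad", "wrong", "confusing", "broken"}

def sentiment(text):
    """Classify text as 'positive', 'negative', or 'neutral' by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The answer was clear and helpful"))
print(sentiment("The response was confusing and wrong"))
```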

Neural network: A mathematical system, modeled on the human brain, that learns skills by finding statistical patterns in data. It consists of layers of artificial neurons: The first layer receives the input data, and the last layer outputs the results. Even the experts who create neural networks don’t always understand what happens in between.
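That layered structure can be sketched in a few lines. The weights below are hand-picked purely for illustration; in a real network they are learned by adjusting them to reduce error on training data:

```python
import math

def dense(inputs, weights, biases):
    """One layer of artificial neurons: each neuron computes a weighted
    sum of its inputs plus a bias, passed through a nonlinearity (tanh)."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Hand-picked weights, purely illustrative -- normally *learned* from data.
hidden_w = [[0.5, -0.6], [0.9, 0.2]]   # hidden layer: 2 neurons
hidden_b = [0.1, -0.1]
output_w = [[1.0, -1.0]]               # output layer: 1 neuron
output_b = [0.0]

x = [1.0, 0.5]                     # first layer receives the input data
h = dense(x, hidden_w, hidden_b)   # hidden layer ("in between")
y = dense(h, output_w, output_b)   # last layer outputs the result
print(y)
```

Stack many such layers with millions of neurons each, and the intermediate computations become the hard-to-interpret "in between" the definition describes.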

Parameters: Numerical values that define a large language model’s structure and behavior, like clues that help it guess what words come next. Systems like GPT-4 are thought to have hundreds of billions of parameters.

Reinforcement learning: A technique that teaches an A.I. model to find the best result by trial and error, receiving rewards or punishments from an algorithm based on its results. This system can be enhanced by humans giving feedback on its performance, in the form of ratings, corrections and suggestions.
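The reward-driven, trial-and-error update can be sketched with an invented two-action problem. This is a toy illustration, not any production reinforcement-learning system: the agent tries each action, receives the environment's reward, and nudges its value estimates toward the better choice:

```python
rewards = {"A": 1.0, "B": 3.0}   # hidden payoffs the agent must discover
values = {"A": 0.0, "B": 0.0}    # the agent's current estimates
alpha = 0.5                      # learning rate

for _ in range(10):              # trial-and-error phase: try everything
    for action in ("A", "B"):
        reward = rewards[action]                      # feedback ("reward")
        values[action] += alpha * (reward - values[action])  # update estimate

best = max(values, key=values.get)   # the learned best action
print(best, round(values[best], 3))
```

Human feedback, as described above, plays the same role as the `rewards` table here: it supplies the scores the model's behavior is nudged toward.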

Transformer model: A neural network architecture useful for understanding language that does not have to analyze words one at a time but can look at an entire sentence at once. This was an A.I. breakthrough, because it enabled models to understand context and long-term dependencies in language. Transformers use a technique called self-attention, which allows the model to focus on the particular words that are important in understanding the meaning of a sentence.
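A stripped-down sketch of self-attention follows. Real transformers add learned query/key/value projections, multiple attention heads, and other refinements; the 2-dimensional "word vectors" here are invented for illustration:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """Each word's vector becomes a weighted average of *all* vectors in
    the sentence; the weights come from dot-product similarity, so the
    words most relevant to each position influence it most."""
    out = []
    for q in vectors:
        scores = [sum(a * b for a, b in zip(q, k)) for k in vectors]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, vectors))
                    for i in range(len(q))])
    return out

# Toy 2-dimensional "word vectors" for a 3-word sentence -- invented.
sentence = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
result = self_attention(sentence)
print(result)
```

Because every position attends to the whole sentence at once, the model sees long-range context directly instead of word by word.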

A Look into the 'Libertarian' Telegram

Via Wired

February 2022

Read the full story about the origins of Telegram

(article excerpts)

For years now, the world has fretted over Facebook's—now Meta's—seemingly inexorable dominance: its relentless neutralization of competitors either by acquisition or elimination; its subjugation of politics, culture, and every facet of intimate life to the priorities of an algorithm built for ad sales; its succession of escalating privacy scandals; and its record of disingenuous apologies when it gets caught. But over the past year or so, Mark Zuckerberg's empire has begun to look a little less invulnerable. Lawmakers have increasingly arrayed against it, and at brief moments—like the January 2021 mass exodus from WhatsApp, and a second one that followed a Facebook outage in October—the powerful network effects that drive Meta's supremacy have seemed to shift briefly into reverse. Somehow Telegram, with its tiny staff, has become one of the greatest beneficiaries of those stumbles.

Whether this is a good thing for the world is another question, one muddied by how poorly understood Telegram is, especially in the US. The vast majority of journalists still refer to it as an “encrypted messaging app.” This description unnerves many security experts, who warn that, unlike Signal or WhatsApp, Telegram is not end-to-end encrypted by default; that users must go out of their way to turn on the app's “secret chats” function (which few people actually do); and that only individual conversations, not those among groups, can be end-to-end encrypted. For the millions of people who use Telegram under repressive regimes, experts say, that confusion could be costly.

But the term “messaging app” is itself somewhat misleading, in ways that lead many to underestimate Telegram. Over the years, the app has become a deliberate hybrid of a messaging service and a social media platform—a rival not only to WhatsApp and Signal but also, increasingly, to Facebook itself. Users can join public or private channels with unlimited numbers of followers, where anyone can like, share, or comment. They can also join private groups with up to 200,000 members—a scale that dwarfs WhatsApp's 256-member limit. But unlike Facebook, at Telegram there is no targeted advertising and no algorithmic feed...

It's been vital to pro-democracy protesters from Belarus to Hong Kong, but the global right seems to find Telegram particularly congenial...

In interviews, (Telegram's Russian founder) Durov would depict Telegram as a distributed company, free of any one country's jurisdiction and security apparatus—and, above all, beyond the grip of Putin's Russia. He portrayed himself to the Times as an “exile,” a depiction that would go on to reappear in countless press accounts. The paper described him as a “nomad, moving from country to country every few weeks with a small band of computer programmers.” Durov's Instagram feed seemed to bear this out, with snapshots of glamorous hotels and landmarks in the places he stayed—in Beverly Hills, Paris, London, Rome, Venice, Bali, Helsinki...

In 2015 alone, Telegram's small team created a platform for users to create and publish their own chatbots; they added reply, mention, and hashtag functions to group chats; they added in-app video playback and a new photo editor; and, for the first time, they introduced public channels for those wanting to broadcast to an unlimited number of followers. Only Facebook, with its much larger staff, was adding features at a comparable rate...

In the world of social media, Telegram is a distinct oddity. Often rounding out lists of the world's 10 largest platforms, it has just around 30 core employees, had no source of ongoing revenue until very recently, and—in an era when tech firms face increasing pressure to quash hate speech and misinformation—exercises virtually no content moderation, except to take down illegal pornography and calls for violence. At Telegram it is an article of faith, and a marketing pitch, that the company's platform should be available to all, regardless of politics or ideology. “For us, Telegram is an idea,” Pavel Durov, Telegram's Russian founder, has said. “It is the idea that everyone on this planet has a right to be free.”

NewsGuard targets Disinformation / Misinformation: Check out the NewsGuard online services

Online Manipulation: Turning social media and database marketing into political persuasion

An inside look at the U.S. 2020 presidential campaign - "The Great Hack"



The Problems Beyond Fake News

Political Marketing, Oppo Politics, Disinfo, Data Manipulation

"What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?"

Worse because of our ever-expanding computational prowess; worse because of ongoing advancements in artificial intelligence and machine learning that can blur the lines between fact and fiction; worse because those things could usher in a future where anyone could make it “appear as if anything has happened, regardless of whether or not it did.”




Digital Citizenship

Social Media
Graphic: Social Media World circa 2016


An Internet Snapshot - March 2018

Digital Rights

Disinformation - Online - Dangerous

Fact Checking

Fact Checking and Embedded Links


Internet of Things and Intelligent Energy Efficiency

Mobile Internet

Online Privacy

Privacy on the Net-Online Rights

Strategic Policy-Internet Online Rights

Virtual Private Network

Wikipedia, Wikimedia, MediaWiki, and wiki

World Wide Web

Graphic: Internet Bill of Rights


Alliance for an Affordable Internet

American Principles Project

Atlantic

BSA/The Software Alliance

Daily Dot

Digital Bill of Rights

Digital Rights at Wikipedia

Economist

Esquire

Fast Company Labs

Fox

Freedom Online Coalition

Ghostery

Global Network Initiative

Principles on Freedom of Expression and Privacy

Guardian

Hackers News Bulletin

Hunton Privacy Blog

Inside Counsel

Internet Governance Project

Internet Privacy at Wikipedia

LinkIs – You Leave a Trail with Everything You Do Online (online video)

New America Foundation / Kevin Bankston

NPR / On Point

Online Magna Carta

Online Publishers Association

Open Access Overview

Open Architecture Network

Open Internet Tools Project

Open Rights Group

Open Technology Institute

Pando

PC World

Pew Institute

ProPublica

Reader Supported News

Reform Government Surveillance

Save the Net

SearchNet Networking

Section 215 / "Patriot Act", Title II

Section 702 / "FISA"

Social Concept Consulting

TechDirt

The Hill

ThinkProgress

UN / The Right to Privacy in the Digital Age (June 2014)

UN / Promotion and Protection of Human Rights and Fundamental Freedoms While Countering Terrorism

VentureBeat

Vox

Web We Want

Wim Says – If You're Not Paying for It, Then You're the Product

Wired – How to Save the Net

Wired