Bill C-63 – The Online Harms Act In A Nutshell

Four years ago, a group of people from across Canada was invited to participate in a series of meetings to gauge how Canadians felt about online behaviour and how to combat some of the harms that have come out of the internet. This was part of the Citizens' Assembly on Democratic Expression. The group was to have met in May 2020, but COVID-19 forced it to meet virtually rather than in person. Its recommendations were sent to Parliament, and parts of Bill C-11 grew out of that work, though most of what had been suggested was rejected.

The following year, another cross-Canada group met and discussed all manner of internet issues, including what appeared to be the promotion of hateful and/or harmful content. That report, too, went to Parliament and was again rejected.

In June 2022, an amalgamation of the two cross-Canada groups met in Ottawa to hash out better plans for dealing with online hatred and harm. After four days, they handed Parliament their suggestions for dealing with trolls, misinformation, and disinformation. (More information about the Citizens' Assembly can be found at

Now comes Bill C-63, the Online Harms Act.

What is the Online Harms Act? It's a collection of amendments to the Criminal Code of Canada, the Criminal Records Act, the Corrections and Conditional Release Act, the Youth Criminal Justice Act, the Canadian Human Rights Act, and the act governing the mandatory reporting of internet child pornography.

What is harm? According to the bill, it covers seven categories of content:

- "Content that foments hatred" of a particular person or group of people (based on the Canadian Human Rights Act)
- "Content that incites violence," either actively or by threatening violence
- "Content that incites violent extremism or terrorism"
- "Content that induces a child" (a person under 18 years of age) "to harm themselves"
- "Content that sexually victimizes a child or re-victimizes a survivor"
- "Content used to bully a child"
- "Intimate content communicated without consent"

The bill establishes a Digital Safety Commission and a Digital Safety Ombudsperson (with a supporting Digital Safety Office), appointed by the government. Together they would promote online safety, protect children's physical and mental health, mitigate the risk that people will be exposed to harmful content, enable public discourse free of harmful content, and reduce the harm caused by hateful content. A dedicated staff is to carry out these aims, on the understanding that all information they receive will be handled in the strictest confidence.

The Digital Safety Commission and its office will set guidelines for social media platforms to follow. If a report is made and a person or persons are investigated, the matter becomes one of law enforcement, and information is not revealed to anyone but law enforcement.

Social media operators must label harmful messages that are repeated or amplified by computer programs when those programs make the messages more prominent.

A representative of the company must be easily reachable for reports of harmful content.

Social media operators must report child pornography (and other harms), and they must make any instance of harmful content inaccessible to Canadians within 24 hours of receiving a complaint.

If harmful content is not removed, or if the operator's response to a complaint does not resolve the matter, a submission can be made to the Digital Safety Commission. The Commission will investigate the complaints it receives and decide whether the content is harmful. Content found not to be harmful that the operator has hidden may be restored to the platform. Operators must retain harmful content and all information in each report for one year; if a court proceeding requires it, the information must be kept until the case is resolved.

In short, social media platforms are to be held accountable for harmful content on their sites.

The Digital Safety Commission also weeds out frivolous, vengeful, and petty complaints, so that truly harmful content can be dealt with expeditiously.

So far, the bill has received its first reading. It's on the radar of people who seem to think that "freedom of speech" covers all manner of speech, no matter how hurtful or hateful. When this bill passes, we can take back some of our internet and social media from the bots and trolls, and limit opportunities for child endangerment.
