
Django - Tutorial 038. Use BeautifulSoup 4 to clean up published content from unwanted html tags

BeautifulSoup4, Django, Python, HTML, Python 3

When developing a web site that lets users write comments or publish articles containing html markup, a mechanism for stripping unwanted html tags, in particular script and style tags, is essential: malicious scripts have no place on a quality resource. It is also useful to be able to strip inline styling, especially if the site is meant to have a uniform look. A jumble of loud fonts helps no one and only creates layout problems.

To implement this mechanism I use the Python package Beautiful Soup 4, and I eventually ended up with a single class that does everything I need: it removes unnecessary tags, adds required classes to tags, and preserves the classes that should survive stripping. The last point matters for classes added while a comment is being written, for example when a YouTube video is embedded or when the user picks the programming language for a code block.

Install BeautifulSoup 4

pip install beautifulsoup4
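
The class below passes "lxml" to BeautifulSoup, so the lxml parser needs to be installed as well (pip install lxml). As a quick sanity check, here is a minimal sketch of the core idea the class relies on: parse the text and extract() the tags you do not want (the sample markup is just an illustration).

# A minimal sketch, assuming lxml is installed alongside beautifulsoup4
from bs4 import BeautifulSoup

html = '<p>Hello</p><script>alert("evil");</script>'  # sample markup for illustration
soup = BeautifulSoup(html, "lxml")
for tag in soup.find_all('script'):
    tag.extract()  # removes the tag from the parse tree
print(soup.body.decode_contents())  # -> <p>Hello</p>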

Program code

The example is written as a class, so that the required logic can be built up through inheritance and overriding of the cleaning method, and the html-cleaning module does not degenerate into a collection of heterogeneous, inconsistent functions.

# -*- coding: utf-8 -*-

import re

from bs4 import BeautifulSoup
from YourDjangoApp import settings  # your project settings; SITE_URL must be defined there


class ESoup:
    # Initialization of the cleaning object;
    # extra tags for removal can be passed in right away, so there is no need to subclass just for that
    def __init__(self, text, tags_for_extracting=()):
        self.soup = BeautifulSoup(text, "lxml") if text else None
        self.tags_for_extracting = ('script', 'style',) + tags_for_extracting

    # Method to remove specified tags
    def __extract_tags(self, soup, tags=()):
        for tag in tags:
            for current_tag in soup.find_all(tag):
                current_tag.extract()
        return soup

    # Method for deleting attributes of all tags
    def __remove_attrs(self, soup):
        for tag in soup.find_all(True):
            tag.attrs = {}
        return soup

    # Method for deleting attributes of all tags except those listed in whitelist_tags
    def __remove_all_attrs_except(self, soup, whitelist_tags=()):
        for tag in soup.find_all(True):
            if tag.name not in whitelist_tags:
                tag.attrs = {}
        return soup

    # Removes all attributes from all tags except those listed in whitelist_tags.
    # If a tag is in whitelist_tags, only the attributes not listed in whitelist_attrs are removed.
    # In addition, the classes listed in whitelist_classes are kept on the tag
    # regardless of whether the tag appears in whitelist_tags or the attribute in whitelist_attrs.
    # In my case some classes play a special role for certain tags.
    def __remove_all_attrs_except_saving(self, soup, whitelist_tags=(), whitelist_attrs=(), whitelist_classes=()):
        for tag in soup.find_all(True):
            saved_classes = []
            if tag.has_attr('class'):
                classes = tag['class']
                for class_str in whitelist_classes:
                    if class_str in classes:
                        saved_classes.append(class_str)

            if tag.name not in whitelist_tags:
                tag.attrs = {}
            else:
                attrs = dict(tag.attrs)
                for attr in attrs:
                    if attr not in whitelist_attrs:
                        del tag.attrs[attr]

            if len(saved_classes) > 0:
                tag['class'] = ' '.join(saved_classes)

        return soup

    # Adds rel="nofollow" to the tag after checking the url in the given attribute (href or src).
    # If the link points to an internal page of your site, nofollow is not added.
    def __add_rel_attr(self, soup, tag, attr):
        for tag in soup.find_all(tag):
            attr_content = tag.get(attr)
            # the attribute may be missing, so guard against None
            if attr_content and not attr_content.startswith(settings.SITE_URL) and not attr_content.startswith('/'):
                tag['rel'] = ['nofollow']
        return soup

    # Adds new classes to the tag, preserving the classes that already existed.
    def __add_class_attr(self, soup, tag, classes=()):
        for tag in soup.find_all(tag):
            current = tag.get('class', [])
            # tag['class'] is a list when it comes from the parser,
            # but a string if it was reassigned earlier, so normalize it first
            saved_classes = current.split() if isinstance(current, str) else list(current)
            saved_classes.extend(classes)
            tag['class'] = ' '.join(saved_classes)
        return soup

    # The method that performs the cleaning; override it if you need to change the html cleanup logic
    def clean(self):
        # if BeautifulSoup was created during initialization, then you can perform the cleanup
        if self.soup:
            # Remove all tags that we don’t like.
            soup = self.__extract_tags(soup=self.soup, tags=self.tags_for_extracting)
            # Remove all attributes from all tags except
            # src and href for tags img and a,
            # and also leave prettyprint class
            soup = self.__remove_all_attrs_except_saving(
                soup=soup,
                whitelist_tags=('img', 'a'),
                whitelist_attrs=('src', 'href',),
                whitelist_classes=('prettyprint',)
            )
            # add rel="nofollow" for external links
            soup = self.__add_rel_attr(soup=soup, tag='a', attr='href')
            soup = self.__add_rel_attr(soup=soup, tag='img', attr='src')
            # improve the appearance of images using the img-fluid class
            soup = self.__add_class_attr(soup=soup, tag='img', classes=('img-fluid',))
            # add the linenums class for pre tags
            soup = self.__add_class_attr(soup=soup, tag='pre', classes=('linenums',))
            # return only the useful content: BeautifulSoup 4 wraps everything in html and body tags,
            # which I, for example, do not need
            return re.sub('<body>|</body>', '', soup.body.prettify())
        return ''

    # Static method, a kind of shortcut
    @staticmethod
    def clean_text(text, tags_for_extracting=()):
        soup = ESoup(text=text, tags_for_extracting=tags_for_extracting)
        return soup.clean()
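
Since all of the cleanup logic sits in the single clean() method, extending the behaviour usually comes down to subclassing. A minimal sketch of that idea; the subclass name ECommentSoup and its choices are only an illustration, not part of the class above:

# Hypothetical subclass: additionally strips iframe tags for comments,
# then reuses the parent cleanup unchanged
class ECommentSoup(ESoup):
    def __init__(self, text, tags_for_extracting=()):
        super().__init__(text, tags_for_extracting=('iframe',) + tags_for_extracting)

    def clean(self):
        cleaned = super().clean()
        # extra post-processing of the cleaned html could go here
        return cleaned

Note that the helper methods use double-underscore names and are therefore name-mangled, so a subclass cannot call self.__extract_tags() directly; wrapping super().clean() or passing extra tags to __init__, as above, is the more comfortable route.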

Usage

Either like this:

soup = ESoup(text=text, tags_for_extracting=tags_for_extracting)
soup.clean()

Or like this:

ESoup.clean_text(text=text, tags_for_extracting=tags_for_extracting)
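
In a Django project the shortcut is typically called wherever user-submitted html is accepted, for example in a form's field-cleaning hook. The following is only a sketch under assumed names: Comment is a hypothetical model with an html content field, and ESoup is imported from wherever you placed the class.

# A sketch, not code from the article: clean a comment before it is saved.
from django import forms

from .models import Comment   # hypothetical model with an html 'content' field
from .esoup import ESoup      # the class defined above, assumed to live in esoup.py


class CommentForm(forms.ModelForm):
    class Meta:
        model = Comment
        fields = ('content',)

    def clean_content(self):
        # strip dangerous and unwanted markup before the comment hits the database
        return ESoup.clean_text(text=self.cleaned_data['content'])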






Comments

I think it makes sense to replace the last method


    @staticmethod
    def clean_text(text, tags_for_extracting=()):
        soup = ESoup(text=text, tags_for_extracting=tags_for_extracting)
        return soup.clean()

with a classmethod (with inheritance the old variant would break, while the class-based one will not):

    @classmethod
    def clean_text(cls, text, tags_for_extracting=()):
        soup = cls(text=text, tags_for_extracting=tags_for_extracting)
        return soup.clean()
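
A quick illustration of the difference, with a hypothetical subclass:

# Hypothetical subclass that overrides clean()
class MySoup(ESoup):
    def clean(self):
        return 'overridden'

# With the @staticmethod version ESoup is hard-coded inside clean_text,
# so this call would run ESoup.clean(), ignoring the override.
# With the @classmethod version cls is MySoup here, so it prints 'overridden'.
print(MySoup.clean_text(text='<p>hello</p>'))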

Thanks for the information, I had not thought about that.

I will have to try it out first.
