Previously, I had been cleaning out data using the code snippet below
```python
import unicodedata, re, io

all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) == 'C')
cc_re = re.compile('[%s]' % re.escape(control_chars))

def rm_control_chars(s):  # see http://www.unicode.org/reports/tr44/#General_Category_Values
    return cc_re.sub('', s)

cleanfile = []
with io.open('filename.txt', 'r', encoding='utf8') as fin:
    for line in fin:
        cleanfile.append(rm_control_chars(line))
```
There are newline characters in the file that I want to keep.
The following records the time taken for cc_re.sub('', s) to substitute the first few lines (the 1st column is the time taken and the 2nd column is …):
As @ashwinichaudhary suggested, using s.translate(dict.fromkeys(control_chars)) instead, the same timing log outputs:
But the code is really slow for my 1 GB of text. Is there any other way to clean out control characters?
I found a solution working character by character. I benchmarked it using a 100K file:
```python
import unicodedata, re, io
from time import time

# This is to generate randomly a file to test the script
from string import lowercase
from random import random

all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = [c for c in all_chars if unicodedata.category(c) == 'C']
chars = (list(u'%s' % lowercase) * 115117) + control_chars

fnam = 'filename.txt'
out = io.open(fnam, 'w')
for line in range(1000000):
    out.write(u''.join(chars[int(random() * len(chars))] for _ in range(600)) + u'\n')
out.close()

# version proposed by alvas
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) == 'C')
cc_re = re.compile('[%s]' % re.escape(control_chars))

def rm_control_chars(s):
    return cc_re.sub('', s)

t0 = time()
cleanfile = []
with io.open(fnam, 'r', encoding='utf8') as fin:
    for line in fin:
        line = rm_control_chars(line)
        cleanfile.append(line)
out = io.open(fnam + '_out1.txt', 'w')
out.write(''.join(cleanfile))
out.close()
print time() - t0

# using a set and checking character by character
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = set(c for c in all_chars if unicodedata.category(c) == 'C')

def rm_control_chars_1(s):
    return ''.join(c for c in s if not c in control_chars)

t0 = time()
cleanfile = []
with io.open(fnam, 'r', encoding='utf8') as fin:
    for line in fin:
        line = rm_control_chars_1(line)
        cleanfile.append(line)
out = io.open(fnam + '_out2.txt', 'w')
out.write(''.join(cleanfile))
out.close()
print time() - t0
```
The output is:
I tried on a 1 GB file (only for the second one) and it lasted 186s.
I also wrote this other version of the same script, slightly faster (176s), and more memory efficient (for very large files not fitting in RAM):
```python
t0 = time()
out = io.open(fnam + '_out5.txt', 'w')
with io.open(fnam, 'r', encoding='utf8') as fin:
    for line in fin:
        out.write(rm_control_chars_1(line))
out.close()
print time() - t0
```
Since in UTF-8 all the ASCII control characters are encoded as a single byte (UTF-8 is ASCII-compatible) and sit below 32, I suggest this fast piece of code:
```python
#!/usr/bin/python
import sys

ctrl_chars = [x for x in range(0, 32) if x not in (ord("\r"), ord("\n"), ord("\t"))]
filename = sys.argv[1]

with open(filename, 'rb') as f1:
    with open(filename + '.txt', 'wb') as f2:
        b = f1.read(1)
        while b != '':
            if ord(b) not in ctrl_chars:
                f2.write(b)
            b = f1.read(1)
```
Is it ok enough?
Does this have to be in Python? How about cleaning the file before you read it into Python in the first place? Use sed, which will treat it line by line anyway.
See removing control characters using sed.
And if you pipe it out to another file, you can open that. I don't know how fast it would be, though. You can do it in a shell script and test it. According to this page, sed runs at 82 million characters per second.
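As a hedged sketch of that idea (file names are placeholders), tr can also delete the ASCII control bytes in one streaming pass while keeping tab, newline and carriage return:

```shell
# Delete ASCII control characters except \t (011), \n (012) and \r (015),
# plus DEL (177). 'dirty.txt' and 'clean.txt' are placeholder names.
tr -d '\000-\010\013\014\016-\037\177' < dirty.txt > clean.txt
```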
Hope it helps.
If you want it to move really fast? Break your input into multiple chunks, wrap up that data munging code as a method, and use Python's
multiprocessing package to parallelize it, writing to some common text file. Going character-by-character is the easiest method to crunch stuff like this, but it always takes a while.
I'm surprised no one has mentioned mmap which might just be the right fit here.
Note: I'll put this in as an answer in case it's useful and apologize that I don't have the time to actually test and compare it right now.
You load the file into memory (kind of) and then you can actually run a
re.sub() over the object. This helps eliminate the IO bottleneck and allows you to change the bytes in-place before writing it back at once.
After this, then, you can experiment with str.translate() vs re.sub() and also include any further optimisations like double buffering CPU and IO or using multiple CPU cores/threads.
But it'll look something like this:

```python
import mmap

f = open('test.out', 'r')
m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
```
A nice excerpt from the mmap documentation is:
..You can use mmap objects in most places where strings are expected; for example, you can use the re module to search through a memory-mapped file. Since they’re mutable, you can change a single character by doing obj[index] = 'a',..
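A sketch of the full round trip under those assumptions (Python 3, a bytes regex; the file names and the ASCII-only control-char class are illustrative):

```python
import mmap
import re

# ASCII control bytes, keeping \t, \n and \r.
cc_re = re.compile(rb'[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]')

def mmap_clean(src, dst):
    with open(src, 'rb') as f:
        m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            # The re module accepts mmap objects as buffers, so the regex
            # runs over the mapped file without reading it all via Python IO.
            cleaned = cc_re.sub(b'', m)
        finally:
            m.close()
    with open(dst, 'wb') as out:
        out.write(cleaned)
```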
A couple of things I would try.
First, do the substitution with a replace all regex.
Second, set up a regex character class with known control-char ranges instead
of a class of individual control chars.
(This is in case the engine doesn't optimize it to ranges.
A range requires two conditionals at the assembly level,
as opposed to an individual conditional for each char in the class.)
Third, since you are removing the characters, add a greedy quantifier
after the class. This negates the need to enter the substitution
subroutine after each single-char match, instead grabbing all adjacent chars at once.
I don't know Python's syntax for regex constructs off the top of my head,
nor all the control codes in Unicode, but the result would look something like this:
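As an illustrative sketch of those three points (the ranges below cover only the ASCII and Latin-1 control blocks, not all of Unicode category C):

```python
import re

# A class built from control-char *ranges*, with a greedy '+' quantifier so
# each match grabs a whole run of adjacent control characters and they are
# removed in a single substitution.
cc_run_re = re.compile(u'[\x00-\x1f\x7f-\x9f]+')

def rm_control_runs(s):
    return cc_run_re.sub(u'', s)
```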
The largest amount of time would be in copying the results to another string.
The smallest amount of time would be in finding all the control codes, which
would be minuscule.
All things being equal, the regex (as described above) is the fastest way to go.