In Django, the streaming response StreamingHttpResponse is a great tool: it lets you produce a large file quickly and with very little memory.
In our project it currently backs two things. One is an EventSource endpoint, used to improve the sluggish feel users get during cross-system communication; I won't go into detail on that here.
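(For the curious, the EventSource case boils down to yielding `text/event-stream` frames from a generator. A minimal sketch of the frame format follows; `sse_frame` is a made-up helper name, and the Django wiring is only shown in comments.)

```python
def sse_frame(data, event=None):
    # Format one Server-Sent Events message: an optional "event:" line,
    # a "data:" line, and a blank line terminating the frame.
    lines = []
    if event is not None:
        lines.append("event: %s" % event)
    lines.append("data: %s" % data)
    return "\n".join(lines) + "\n\n"

# In a Django view this would be streamed as:
#   StreamingHttpResponse((sse_frame(chunk) for chunk in source),
#                         content_type="text/event-stream")
```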
The other is generating a large CSV file.
When the Django process sits behind a web container such as gunicorn or uwsgi, a response that takes too long to come back gets killed by the container. You can work around this by raising the container's timeout, but that treats the symptom, not the cause. To fix it properly, Python generators combined with Django's StreamingHttpResponse are exactly what's needed: the response starts flowing one chunk at a time, as soon as the first chunk is ready.
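The core idea can be sketched in a few lines (the row data and formatting here are placeholders): instead of building the whole body in memory and returning it at the end, you hand StreamingHttpResponse a generator.

```python
def iter_rows(rows):
    # A generator: each line is produced on demand, so memory use
    # stays flat no matter how many rows there are.
    for row in rows:
        yield ",".join(row) + "\n"

# A blocking view builds everything before responding:
#   return HttpResponse("".join(iter_rows(rows)))
# A streaming view starts responding immediately:
#   return StreamingHttpResponse(iter_rows(rows), content_type="text/csv")
```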
Handling Chinese text in the CSV also matters: the file has to open in Excel without mojibake, which is why the output below is UTF-8 with a BOM prepended. To save space I've pasted all the code together; in practice, place it wherever your project layout dictates.

The code:
```python
# -*- coding: utf-8 -*-
from __future__ import absolute_import

import codecs
import csv
import cStringIO  # Python 2 only


class Echo(object):
    """A file-like object that hands back whatever is written to it,
    so csv output can be yielded instead of buffered."""

    def write(self, value):
        return value


class UnicodeWriter:
    """
    A CSV writer which will write rows to CSV file "f",
    which is encoded in the given encoding.
    (Adapted from the examples in the Python 2 csv docs;
    rows are expected to contain unicode strings.)
    """

    def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
        # Redirect output to a queue
        self.queue = cStringIO.StringIO()
        self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
        self.stream = f
        self.encoder = codecs.getincrementalencoder(encoding)()

    def writerow(self, row):
        self.writer.writerow([s.encode("utf-8") for s in row])
        # Fetch UTF-8 output from the queue ...
        data = self.queue.getvalue()
        data = data.decode("utf-8")
        # ... and re-encode it into the target encoding
        data = self.encoder.encode(data)
        # Write to the target stream
        value = self.stream.write(data)
        # Empty the queue
        self.queue.truncate(0)
        return value

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)
```
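A quick aside on why the Echo trick works: csv.writer only ever calls `write()` on the object it is given, and in CPython `writerow` returns whatever that `write()` call returned, so each formatted line can be yielded straight through to the client. A minimal demonstration (shown here in Python 3 form, where csv handles text natively):

```python
import csv


class Echo(object):
    # csv.writer calls write() once per row; returning the value
    # lets the caller yield each formatted line immediately.
    def write(self, value):
        return value


line = csv.writer(Echo()).writerow(["a", "b"])
# line is now the formatted CSV row, e.g. "a,b\r\n"
```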
```python
from django.views.generic import View
from django.http.response import StreamingHttpResponse


class ExampleView(View):
    headers = [u'一些', u'表头']

    def get(self, request):
        result = [[u'第一行', u'数据1'],
                  [u'第二行', u'数据2']]
        echoer = Echo()
        writer = UnicodeWriter(echoer)

        def csv_iterator():
            # The BOM is what makes Excel recognise the file as UTF-8
            yield codecs.BOM_UTF8
            yield writer.writerow(self.headers)
            for row in result:
                yield writer.writerow(row)

        response = StreamingHttpResponse(
            csv_iterator(),
            content_type="text/csv;charset=utf-8")
        response['Content-Disposition'] = 'attachment;filename="example.csv"'
        return response
```
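The recipe above is Python 2 specific (cStringIO, byte-oriented csv). On Python 3 the csv module works with text directly, so the same byte stream, BOM included, can be produced much more simply. A hedged sketch of the equivalent iterator, with made-up sample rows, that you can check standalone without Django:

```python
import codecs
import csv
import io


def csv_chunks(headers, rows):
    # Same shape as csv_iterator in the view: BOM first so Excel
    # detects UTF-8, then one encoded CSV line per row.
    yield codecs.BOM_UTF8
    for row in [headers] + rows:
        buf = io.StringIO()
        csv.writer(buf).writerow(row)
        yield buf.getvalue().encode("utf-8")


chunks = list(csv_chunks(["一些", "表头"], [["第一行", "数据1"]]))
# chunks[0] is the UTF-8 BOM; each later chunk is one encoded CSV line
```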