
Flask Performance Tuning





Overview

The platform we are currently running has shown rather poor performance in use, so we need to find ways to tune it.

Tools

Siege is an HTTP load-testing and benchmarking tool. It is designed to let web developers measure their code under duress and see how it stands up to load on the internet. Siege supports basic authentication, cookies, and the HTTP, HTTPS and FTP protocols. It lets the user hit a server with a configurable number of simulated clients, placing the server "under siege".

In short, Siege is a multi-threaded HTTP stress-testing tool. The official site is here, and the most recent release listed there is 3.1.4; installation instructions can be found on the site. The site seems to have gone without updates for quite a while, though: the version installed on my Mac is already 4.0.4. On macOS you can install it directly with Homebrew.

brew install siege

siege
SIEGE 4.0.4
Usage: siege [options]
       siege [options] URL
       siege -g URL
Options:
  -V, --version             VERSION, prints the version number.
  -h, --help                HELP, prints this section.
  -C, --config              CONFIGURATION, show the current config.
  -v, --verbose             VERBOSE, prints notification to screen.
  -q, --quiet               QUIET turns verbose off and suppresses output.
  -g, --get                 GET, pull down HTTP headers and display the
                            transaction. Great for application debugging.
  -p, --print               PRINT, like GET only it prints the entire page.
  -c, --concurrent=NUM      CONCURRENT users, default is 10
  -r, --reps=NUM            REPS, number of times to run the test.
  -t, --time=NUMm           TIMED testing where "m" is modifier S, M, or H
                            ex: --time=1H, one hour test.
  -d, --delay=NUM           Time DELAY, random delay before each requst
  -b, --benchmark           BENCHMARK: no delays between requests.
  -i, --internet            INTERNET user simulation, hits URLs randomly.
  -f, --file=FILE           FILE, select a specific URLS FILE.
  -R, --rc=FILE             RC, specify an siegerc file
  -l, --log[=FILE]          LOG to FILE. If FILE is not specified, the
                            default is used: PREFIX/var/siege.log
  -m, --mark="text"         MARK, mark the log file with a string.
                            between .001 and NUM. (NOT COUNTED IN STATS)
  -H, --header="text"       Add a header to request (can be many)
  -A, --user-agent="text"   Sets User-Agent in request
  -T, --content-type="text" Sets Content-Type in request
      --no-parser           NO PARSER, turn off the HTML page parser
      --no-follow           NO FOLLOW, do not follow HTTP redirects

Copyright (C) 2017 by Jeffrey Fulmer, et al.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.

Here are a few commonly used commands; see reference 1 for what each command-line argument means.

# GET request
siege -c 1000 -r 100 -b url
# POST request
siege -c 1000 -r 100 -b "url POST {\"accountId\":\"123\",\"platform\":\"ios\"}"
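Siege can also read requests from a file via the -f option, one request per line; POST lines use the same "URL POST data" form as the command above. Below is a minimal Python sketch for generating such a file; the file name urls.txt and the example endpoints are placeholders for illustration, not part of the original setup.

# generate_urls.py - build a urls.txt for: siege -c 100 -r 10 -b -f urls.txt
import json

base = "http://127.0.0.1:5000"           # hypothetical target host
get_paths = ["/", "/hello/libai"]         # plain GET requests, one URL per line
post_payload = {"accountId": "123", "platform": "ios"}

with open("urls.txt", "w") as f:
    for path in get_paths:
        f.write(base + path + "\n")
    # POST entries follow siege's "URL POST data" convention
    f.write(base + "/api POST " + json.dumps(post_payload) + "\n")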

Testing

Test code

Let's look at the file tree structure with tree:

➜  flask tree
.
├── hello1.py
├── hello1.pyc
├── hello2.py
├── hello2.pyc
├── hello3.py
└── templates
    └── hello.html

Below is a Flask app that does not use a template and simply returns a string.

# file hello1.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)

Below is a Flask app that renders a template file.

# file hello2.py
from flask import Flask,render_template

app = Flask(__name__)

@app.route('/hello/')
@app.route('/hello/<name>')
def hello(name=None):
    return render_template('hello.html', name=name)

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)

The hello.html file:

<!doctype html>
<title>Hello from Flask</title>
{% if name %}
  <h1>Hello {{ name }}!</h1>
{% else %}
  <h1>Hello, World!</h1>
{% endif %}

Running Flask directly

First, the test results for hello1.py.

# 100 concurrent users
siege -c 100 -r 10 -b http://127.0.0.1:5000

Transactions:		        1000 hits
Availability:		      100.00 %
Elapsed time:		        1.17 secs
Data transferred:	        0.01 MB
Response time:		        0.11 secs
Transaction rate:	      854.70 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		       92.12
Successful transactions:        1000
Failed transactions:	           0
Longest transaction:	        0.14
Shortest transaction:	        0.01

# 200 concurrent users
siege -c 200 -r 10 -b http://127.0.0.1:5000

Transactions:		        1789 hits
Availability:		       89.45 %
Elapsed time:		        2.26 secs
Data transferred:	        0.02 MB
Response time:		        0.17 secs
Transaction rate:	      791.59 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		      134.37
Successful transactions:        1789
Failed transactions:	         211
Longest transaction:	        2.09
Shortest transaction:	        0.00

# 1000 concurrent users
siege -c 1000 -r 10 -b http://127.0.0.1:5000

Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       16.29 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      613.87 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        2.13
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.08
Shortest transaction:	        0.00

I am not sure why availability dips at 200 concurrent users, but the overall trend is clear: the transaction rate keeps falling as concurrency rises, and at 1000 concurrent users it is down to about 613 trans/sec.

Now let's look at the second app.


# 100 concurrent users
siege -c 100 -r 10 -b http://127.0.0.1:5000/hello/libai

Transactions:		        1000 hits
Availability:		      100.00 %
Elapsed time:		        1.26 secs
Data transferred:	        0.07 MB
Response time:		        0.12 secs
Transaction rate:	      793.65 trans/sec
Throughput:		        0.06 MB/sec
Concurrency:		       93.97
Successful transactions:        1000
Failed transactions:	           0
Longest transaction:	        0.14
Shortest transaction:	        0.04

# 200 concurrent users
siege -c 200 -r 10 -b http://127.0.0.1:5000/hello/libai
Transactions:		        1837 hits
Availability:		       91.85 %
Elapsed time:		        2.52 secs
Data transferred:	        0.13 MB
Response time:		        0.18 secs
Transaction rate:	      728.97 trans/sec
Throughput:		        0.05 MB/sec
Concurrency:		      134.77
Successful transactions:        1837
Failed transactions:	         163
Longest transaction:	        2.18
Shortest transaction:	        0.00

# 1000 concurrent users
siege -c 1000 -r 10 -b http://127.0.0.1:5000/hello/libai
Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       17.22 secs
Data transferred:	        0.70 MB
Response time:		        0.01 secs
Transaction rate:	      580.72 trans/sec
Throughput:		        0.04 MB/sec
Concurrency:		        7.51
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.09
Shortest transaction:	        0.00

Other deployment options

The following tests use the deployment options recommended in the official Flask documentation.

> Although lightweight and easy to use, Flask's built-in server is not suitable for production and it does not scale well. This section mainly covers some of the ways to run Flask correctly in a production environment.
> If you want to deploy your Flask application to a WSGI server not listed here, look up the server's documentation on how to use WSGI with it; just remember that a Flask application object is essentially a WSGI application.

Below I pick a few of the officially documented options and benchmark them.

Gunicorn

Gunicorn "Green Unicorn" is a WSGI HTTP server for UNIX. It is a pre-fork worker model ported from Ruby's Unicorn project, and it supports both eventlet and greenlet. Running a Flask application on Gunicorn is very simple:

gunicorn myproject:app

Of course, to use Gunicorn we first have to install it with pip install gunicorn. To start hello1.py under Gunicorn, the line

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)

must be removed (or guarded, as sketched after the command below). Then run:

# -w sets the number of worker processes, -b binds the IP and port
gunicorn hello1:app -w 4 -b 127.0.0.1:4000
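Instead of deleting the app.run(...) line outright, it can simply be guarded so that the same file works both with python hello1.py and with gunicorn hello1:app. A small sketch of that pattern:

# file hello1.py - runnable directly and importable by gunicorn
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    # only starts the built-in development server when executed directly;
    # gunicorn imports the module and uses `app` without entering this block
    app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)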

By default Gunicorn uses a synchronous, blocking worker model (-k sync), which may not hold up well under heavy concurrency. It also supports better models such as gevent or meinheld, so we can swap the blocking worker for gevent.

# -w sets the number of worker processes, -b binds the IP and port, -k swaps the worker class to gevent
gunicorn hello1:app -w 4 -b 127.0.0.1:4000  -k gevent
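These flags can also be kept in a Gunicorn configuration file, which is itself just a Python module. A minimal sketch, assuming the file name gunicorn_conf.py:

# gunicorn_conf.py - start with: gunicorn hello1:app -c gunicorn_conf.py
bind = "127.0.0.1:4000"    # same as -b
workers = 4                # same as -w
worker_class = "gevent"    # same as -k gevent; requires pip install gevent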

Below I test four configurations at 1000 concurrent users with 10 repetitions each: 1 worker and 4 workers, each with and without the gevent worker class.

Before testing, be sure to raise the open-file limit (ulimit), otherwise you will hit "Too many open files" errors. I raised it to 65535:
ulimit -n 65535
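The limit can also be checked, and raised up to the hard limit, from inside Python using the standard resource module, which is a convenient way to confirm what the server process actually received. A small sketch (Unix only):

# show and raise the per-process open-file limit
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d, hard=%d" % (soft, hard))

# raise the soft limit as far as the hard limit allows
resource.setrlimit(resource.RLIMIT_NOFILE, (min(65535, hard), hard))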

gunicorn hello1:app -w 1 -b 127.0.0.1:4000
siege -c 1000 -r 10 -b http://127.0.0.1:4000
Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       15.21 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      657.46 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        0.85
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

As you can see, a single Gunicorn worker is slightly better than running Flask's built-in server directly.

gunicorn hello1:app -w 4 -b 127.0.0.1:4000
siege -c 1000 -r 10 -b http://127.0.0.1:4000

Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       15.19 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      658.33 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        0.75
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

# using the gevent worker; remember to pip install gevent first
gunicorn hello1:app -w 1 -b 127.0.0.1:4000  -k gevent
Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       15.20 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      657.89 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        1.33
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.02
Shortest transaction:	        0.00

gunicorn hello1:app -w 4 -b 127.0.0.1:4000  -k gevent

Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		       15.51 secs
Data transferred:	        0.12 MB
Response time:		        0.00 secs
Transaction rate:	      644.75 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        1.06
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.28
Shortest transaction:	        0.00

As you can see, at 1000 concurrent users the benefit of Gunicorn with gevent is not obvious, but it shows up when we lower the concurrency to 100 or 200 and test again:

gunicorn hello1:app -w 1 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000
Transactions:		        1991 hits
Availability:		       99.55 %
Elapsed time:		        1.62 secs
Data transferred:	        0.02 MB
Response time:		        0.14 secs
Transaction rate:	     1229.01 trans/sec
Throughput:		        0.02 MB/sec
Concurrency:		      167.71
Successful transactions:        1991
Failed transactions:	           9
Longest transaction:	        0.34
Shortest transaction:	        0.00

gunicorn hello1:app -w 4 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000
Transactions:		        2000 hits
Availability:		      100.00 %
Elapsed time:		        0.71 secs
Data transferred:	        0.02 MB
Response time:		        0.04 secs
Transaction rate:	     2816.90 trans/sec
Throughput:		        0.03 MB/sec
Concurrency:		      122.51
Successful transactions:        2000
Failed transactions:	           0
Longest transaction:	        0.17
Shortest transaction:	        0.00

With 4 workers and the gevent worker class, the rate has already reached 2816 trans/sec.

Now let's test hello2.py at 200 concurrent users.

gunicorn hello2:app -w 1 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000/hello/2
Transactions:		        1998 hits
Availability:		       99.90 %
Elapsed time:		        1.72 secs
Data transferred:	        0.13 MB
Response time:		        0.14 secs
Transaction rate:	     1161.63 trans/sec
Throughput:		        0.08 MB/sec
Concurrency:		      168.12
Successful transactions:        1998
Failed transactions:	           2
Longest transaction:	        0.35
Shortest transaction:	        0.00

gunicorn hello2:app -w 4 -b 127.0.0.1:4000  -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000/hello/2
Transactions:		        2000 hits
Availability:		      100.00 %
Elapsed time:		        0.71 secs
Data transferred:	        0.13 MB
Response time:		        0.05 secs
Transaction rate:	     2816.90 trans/sec
Throughput:		        0.19 MB/sec
Concurrency:		      128.59
Successful transactions:        2000
Failed transactions:	           0
Longest transaction:	        0.14
Shortest transaction:	        0.0

The results are roughly the same as for hello1.py, also reaching 2800+ trans/sec, which is basically a 4x performance improvement.

uWSGI

The official site is uWSGI; see the link for installation instructions. On a Mac it can be installed directly with brew install uwsgi. Once installed, run the following in the project directory:

uwsgi --http 127.0.0.1:4000 --module hello1:app

I ran out of time, so I will leave this part here for now.

uWSGI and nginx

To install uWSGI, pip install uwsgi is enough.

Create the configuration file uwsgi.ini; this is uWSGI's configuration file.

[uwsgi]
# enable the master process
master = true
# path to the Python virtual environment (the directory created by virtualenv)
home = venv
# WSGI entry file
wsgi-file = manage.py
# the app object created in the WSGI entry file
callable = app
# bind address and port
socket = 0.0.0.0:5000
# number of worker processes
processes = 4
# threads per worker process
threads = 2
# allowed buffer size
buffer-size = 32768
# protocol. Careful here: this line is required when running uwsgi directly, otherwise the service starts but the browser cannot reach it; when nginx proxies to uwsgi, this line must be removed, otherwise nginx cannot proxy to the uwsgi service.
protocol = http

The uWSGI entry file here is manage.py, where hello1 is the hello1.py shown above with its app.run(debug=False, threaded=True, host="127.0.0.1", port=5000) line commented out.

# file manage.py
from flask_script import Manager  # Manager comes from the flask-script extension
from hello1 import app

manager = Manager(app)

if __name__ == '__main__':
    manager.run()
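Note that Manager is provided by the flask-script extension (pip install flask-script). For uWSGI itself it is not strictly required: uWSGI only needs the module named in wsgi-file to expose the app callable. A simpler entry file that would also satisfy the configuration above, sketched here as an alternative rather than the original setup:

# file manage.py - minimal uWSGI entry point without flask-script
from hello1 import app  # hello1.py with its app.run(...) line removed

if __name__ == '__main__':
    # only used when running the file directly; uWSGI just imports `app`
    app.run(host="127.0.0.1", port=5000)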

Then start the service with uwsgi uwsgi.ini; visiting 127.0.0.1:5000 shows the hello world page. Next we put nginx in front of it. After installing nginx, locate its configuration file; with an apt or yum install the main file is /etc/nginx/nginx.conf. To avoid touching the global configuration, I edit /etc/nginx/sites-available/default instead, which is included from /etc/nginx/nginx.conf, so the changes still take effect. The configuration looks like this:

# nginx per-IP request rate limiting; see references 6 and 7 for details
limit_req_zone $binary_remote_addr zone=allips:100m rate=50r/s;

server {
	listen 80 default_server;
	listen [::]:80 default_server;
	# nginx per-IP request rate limiting; see references 6 and 7 for details
	limit_req   zone=allips  burst=20  nodelay;
	root /var/www/html;
	# Add index.php to the list if you are using PHP
	index index.html index.htm index.nginx-debian.html;
	server_name _;
	# serve static files directly; nginx serves static content much faster than the application server
	location /themes/  {
		alias       /home/dc/CTFd_M/CTFd/themes/;
	}
	# uwsgi configuration
	location / {
		include uwsgi_params;
		uwsgi_pass 127.0.0.1:5000;
		# path to the Python virtualenv
		uwsgi_param UWSGI_PYHOME /home/dc/CTFd_M/venv;
		# project directory
		uwsgi_param UWSGI_CHDIR /home/dc/CTFd_M;
		# entry point
		uwsgi_param UWSGI_SCRIPT manage:app;
		# read timeout
		uwsgi_read_timeout 100;
	}
}

Then start nginx and the site is reachable at 127.0.0.1. Probably because of a configuration problem on my own machine, I could not get this setup working locally, so the comparison below was run in a freshly created virtual machine: Ubuntu Server 16.04 with 2 cores and 2 GB of RAM. Also note that the page being requested is no longer a toy app like hello1.py but a complete application platform; as the Throughput row shows, it is already pushing 20+ MB/sec.

# Both tests below hit the virtual machine (Ubuntu Server 16.04) from the physical host
# started with uwsgi only
siege -c 200 -r 10 -b http://192.168.2.151:5000/index.html
Transactions:		       56681 hits
Availability:		       99.90 %
Elapsed time:		      163.48 secs
Data transferred:	     3385.71 MB
Response time:		        0.52 secs
Transaction rate:	      346.72 trans/sec
Throughput:		       20.71 MB/sec
Concurrency:		      180.97
Successful transactions:       56681
Failed transactions:	          59
Longest transaction:	       32.23
Shortest transaction:	        0.05

# with uwsgi behind nginx (nginx also serving static files)
siege -c 200 -r 10 -b http://192.168.2.151/index.html

Transactions:		       53708 hits
Availability:		       99.73 %
Elapsed time:		      122.13 secs
Data transferred:	     3195.15 MB
Response time:		        0.29 secs
Transaction rate:	      439.76 trans/sec
Throughput:		       26.16 MB/sec
Concurrency:		      127.83
Successful transactions:       53708
Failed transactions:	         148
Longest transaction:	      103.07
Shortest transaction:	        0.00

As you can see, putting nginx in front of uWSGI improves throughput somewhat, from about 346 trans/sec to about 439 trans/sec.

References

  1. Siege load-testing tool: installation and introduction
  2. Flask official documentation
  3. Starting a Flask project with gunicorn + gevent
  4. A brief overview of CGI, FastCGI, WSGI, uWSGI, and uwsgi
  5. Deploying an application with Flask + uwsgi + Nginx
  6. Limiting per-IP request counts and concurrency in nginx
  7. Nginx access-count and concurrency limits
