yastack benchmark #8

Open

dragonorloong opened this issue Aug 13, 2019 · 3 comments

@dragonorloong

I don't know if there is a problem with my configuration, but the performance of yastack is much worse than nginx's.

Traffic Path:

wrk -> envoy(f-stack) -> nginx
wrk -> nginx(linux kernel) -> nginx

I modified the code to always use the f-stack socket:

diff --git a/ev/source/common/network/address_impl.cc b/ev/source/common/network/address_impl.cc
index a7db10f..96dfc2c 100644
--- a/ev/source/common/network/address_impl.cc
+++ b/ev/source/common/network/address_impl.cc
@@ -194,20 +194,9 @@ int64_t InstanceBase::socketFromSocketType(SocketType socketType) const {
       domain = AF_INET;
     }
     int64_t fd;
-    if (likely(provider_ == Envoy::Network::Address::SocketProvider::Fp)) {
-        // Take over only network sockets
-           // FP non-blocking socket
-        SET_FP_NON_BLOCKING(flags);
-        fd = ff_socket(domain, flags, 0);
-        SET_FP_SOCKET(fd);
-        // TODO: Do we need this?
-        //RELEASE_ASSERT(ff_fcntl(fd, F_SETFL, O_NONBLOCK) != -1, "");
-    } else {
-           // Linux non-blocking socket
-        SET_HOST_NON_BLOCKING(flags);
-        fd = ::socket(domain, flags, 0);
-        RELEASE_ASSERT(fcntl(fd, F_SETFL, O_NONBLOCK) != -1, "");
-    }
+    SET_FP_NON_BLOCKING(flags);
+    fd = ff_socket(domain, flags, 0);
+    SET_FP_SOCKET(fd);
     return fd;
   } else {
     ASSERT(type() == Type::Pipe);

envoy config file:

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9995, provider: HOST }

static_resources:
  listeners:
  - name: listener_0
    address:
        socket_address: { address: 0.0.0.0, port_value: 10000, provider: FP}
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_local}
          http_filters:
          - name: envoy.router
  clusters:
  - name: service_local
    connect_timeout: 0.25s
    type: STATIC
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [ { socket_address: { address: 10.182.2.88, port_value: 8090, provider: FP}}]

f-stack config file:

[dpdk]
lcore_mask=1
channel=4
promiscuous=1
numa_on=1
tso=0
vlan_strip=1
port_list=0

[port0]
addr=10.182.2.69
netmask=255.255.252.0
broadcast=10.182.3.255
gateway=10.182.0.1
lcore_list=0

nginx uses the kernel network stack; config file:

worker_processes  1;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;
    keepalive_requests 1000000;
    upstream myupstream {
        server 10.182.2.88:8090;
        keepalive 100;
    }

    server {
        listen       9999 reuseport;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
                proxy_http_version 1.1;
                proxy_set_header Connection "";
                proxy_pass http://myupstream;
        }
    }
}

test results:

  1. envoy
taskset -c  15-50 wrk -c 100 -d 2m -t20 'http://10.182.2.69:10000/' -H 'Connection: Keep-Alive'                                                                                                                                              
Running 2m test @ http://10.182.2.69:10000/
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.61ms    7.77ms  40.59ms   67.28%
    Req/Sec   436.19     42.19   590.00     70.73%
  1042807 requests in 2.00m, 148.36MB read
Requests/sec:   8683.60
Transfer/sec:      1.24MB
  2. nginx
taskset -c  15-50 wrk -c 100 -d 2m -t30 'http://10.182.2.68:9999/' -H 'Connection: Keep-Alive'                                                                                                                                               
Running 2m test @ http://10.182.2.68:9999/
  30 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.41ms  192.00us  42.99ms   99.36%
    Req/Sec     1.25k    29.55     3.62k    81.31%
  4479077 requests in 2.00m, 627.92MB read
Requests/sec:  37306.43
Transfer/sec:      5.23MB
@oschaaf

oschaaf commented Aug 13, 2019

In the tests I ran while working on https://github.com/envoyproxy/nighthawk, the difference between Envoy and nginx was nowhere near as pronounced as the results above. One thing I notice is that one test uses -t20 while the other uses -t30. Is there a reason for that difference?
It may also help to verify that connection-reuse is similar between the two tests.

Having said that, sometimes there's also good reason to sanity check reported numbers. For an example of that involving wrk2, Envoy, and HAProxy, see envoyproxy/envoy#5536 (comment)
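For an apples-to-apples rerun, here is a minimal sketch (hosts and ports taken from the configs above) that keeps the thread count, connection count, and keep-alive behavior identical for both targets, so any remaining gap is attributable to the proxy itself:

# identical -t/-c/-H for both proxies; only the target differs
taskset -c 15-50 wrk -c 100 -d 2m -t20 'http://10.182.2.69:10000/' -H 'Connection: Keep-Alive'
taskset -c 15-50 wrk -c 100 -d 2m -t20 'http://10.182.2.68:9999/' -H 'Connection: Keep-Alive'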

@ratnadeepb

ratnadeepb commented Aug 30, 2019

I ran comparison tests between YAStack-based Envoy and standalone Envoy with the direct-response setup. YAStack-based Envoy runs three threads underneath, and I found the eal-intr-thread and the ev-source-exe thread vying for CPU time. After I pinned these two threads to separate cores, standalone Envoy and YAStack Envoy performance was exactly the same.
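For reference, a minimal sketch of that pinning, assuming the process binary is named envoy (the thread names come from the observation above; the TIDs are placeholders you would read from the ps output):

# list per-thread IDs and names of the running process (binary name assumed)
ps -T -p "$(pidof envoy)" -o spid,comm
# pin the DPDK interrupt thread and the event-loop thread to separate cores
taskset -cp 1 <TID of eal-intr-thread>
taskset -cp 2 <TID of ev-source-exe>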

I have been using the https://github.com/rakyll/hey tool for my tests.
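A representative hey invocation for this kind of run (duration and concurrency mirror the wrk runs above; the host is a placeholder, the listener port comes from the config below):

hey -z 2m -c 100 http://<envoy-host>:8000/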

My f-stack config file looks similar to what @dragonorloong has provided.

Envoy Config file

static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 8000, provider: FP }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
                      # - match:
                      #     prefix: "/service/1"
                      #   route:
                      #     cluster: service1
              - match:
                  #prefix: "/service/2"
                  prefix: "/" 
                direct_response:
                  status: 200 
                  body:
                    inline_string: <4 KB String>
          http_filters:
          - name: envoy.router
            config: {}
            #  clusters:
            #  - name: service1
            #    connect_timeout: 0.25s
            #    type: strict_dns
            #    lb_policy: round_robin
            #    http2_protocol_options: {}
            #    hosts:
            #    - socket_address:
            #        address: service1
            #            #address: 172.31.9.84
            #        port_value: 8000
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
      provider: HOST

@chintan8saaras
Collaborator

cc - @dragonorloong @oschaaf @ratnadeepb

My initial tests only compared vanilla Envoy vs. yastack, and those numbers were encouraging.

I used wrk for my tests with a single-threaded version of yastack. I was interested in per-core throughput, RPS, SSL RPS, SSL throughput, etc.

One thing I did notice is that nginx's event collection does not go through indirections the way libevent does. The indirections in libevent have a small cost associated with them, but the benefit is that any other network-processing code can integrate with the DPDK-infused libevent.

One more test I ran was libevent-on-dpdk (without envoy) and those numbers also looked good.

I am a little too held up with something else right now, but I plan to revisit this sometime.
