Apache2 + Varnish 5 + phpMyAdmin causes segmentation fault (11) - ERR_EMPTY_RESPONSE


I set up everything the Varnish Book recommends for a web server (Debian 8 (Jessie), 8 GB of RAM, 100 GB SSD, MySQL 5.7, PHP 7.0.13, Apache2, opcache (256M)). Varnish uses 256 MB of RAM; Apache has no restriction on how many workers it may use, because restricting them changed nothing. In addition, PHP under Apache uses 1024 MB as its memory limit.

There are 6 websites running, 2 of them monitored by WordPress Jetpack. One subdomain is reserved for phpMyAdmin. Everything runs fine after restarting Varnish and Apache2. But at some point Varnish can no longer reach the site and returns a Guru Meditation. When I access Apache directly on port :8080, I get an ERR_EMPTY_RESPONSE. The Apache error.log says: [core:notice] [pid 31160] AH00052: child pid 8773 exit signal Segmentation fault (11)

The core dump file says:

[New LWP 26426]
Core was generated by '/usr/sbin/apache2 -k start'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f87f5fdc0b8 in ?? ()

A full backtrace (gdb `bt full`) says:

#0  0x00007fcf01cd1b82 in do_fcntl (fd=76, cmd=7, arg=0x7fcf02114e80     <proc_mutex_lock_it>)
      at ../sysdeps/unix/sysv/linux/fcntl.c:39
        resultvar = 18446744073709551104
#1  0x00007fcf01cd1c59 in __libc_fcntl (fd=<optimized out>, cmd=cmd@entry=7)
    at ../sysdeps/unix/sysv/linux/fcntl.c:88
        ap = {{gp_offset = 16, fp_offset = 1196201076, overflow_arg_area =     0x7ffc54089050,
            reg_save_area = 0x7ffc54089000}}
        arg = <optimized out>
        oldtype = <optimized out>
        result = <optimized out>
#2  0x00007fcf01efb326 in proc_mutex_fcntl_acquire (mutex=0x7fcf02890340)
    at /tmp/buildd/apr-1.5.1/locks/unix/proc_mutex.c:579
        rc = <optimized out>
#3  0x00007fcefb0c05fd in accept_mutex_on () at prefork.c:232
        rv = -512
#4  child_main (child_num_arg=76) at prefork.c:611
        current_conn = 0xfffffffffffffe00
        csd = 0x7fcf0288b0a0
        thd = 0x7fcf0288d0a0
        osthd = 140527079356288
        ptrans = 0x7fcf0288b028
        allocator = 0x7fcf03712580
        i = 1409847472
        pollset = 0x7fcf0288d488
        sbh = 0x7fcf0288d480
        lockfile = 0x7fcf02a73898 <ap_listeners> "@73\277"
#5  0x00007fcefb0c0a01 in make_child (s=0x7fcf02a29de0, slot=4) at prefork.c:800
No locals.
#6  0x00007fcefb0c113b in prefork_run (_pconf=0x7fcf02a72f38 <ap_server_conf>, plog=0x7ffc5408917c, s=0x7ffc54089180)
    at prefork.c:1051
        status = 0
        pid = {pid = 6020, in = 0x7fcf01ef3036 <find_entry+134>, out = 0x0, err = 0x7fcf0284d720}
        child_slot = 4
        exitwhy = APR_PROC_EXIT
        processed_status = 0
        rv = -512
#7  0x00007fcf02811e7e in ap_run_mpm (pconf=0x7fcf02a61028, plog=0x7fcf02a2f028, s=0x7fcf02a29de0) at mpm_common.c:94
        pHook = <optimized out>
        n = 0
        rv = -1
#8  0x00007fcf0280b3c3 in main (argc=3, argv=0x7ffc54089468) at main.c:777
        c = 0 '\000'
        error = 0xfffffffffffffe00 <error: Cannot access memory at address 0xfffffffffffffe00>
        process = 0x7fcf02a63118
        pconf = 0x7fcf02a61028
        plog = 0x7fcf02a2f028
        ptemp = 0x7fcf02a2b028
        pcommands = 0x7fcf02a39028
        opt = 0x7fcf02a39118
        mod = 0x7fcf02a6f1c0 <ap_prelinked_modules+64>
        opt_arg = 0x7fcf02a63028 "(p6\277"
        signal_server = 0xfffffffffffffe00
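Frames #2 and #3 show where the child was when it died: inside Apache's fcntl()-based accept mutex. Each prefork child takes an exclusive, blocking POSIX record lock (fcntl with F_SETLKW, which is `cmd=7` in frame #0) on a shared lock file before accepting a connection. A minimal sketch of that locking pattern, not Apache's actual code:

```python
import fcntl
import os
import tempfile

# Emulates apr's proc_mutex_fcntl_acquire(): an exclusive, blocking POSIX
# record lock on a shared lock file. fcntl.lockf() without LOCK_NB issues
# F_SETLKW under the hood, the same call seen in frame #0 above.
def accept_mutex_on(fd):
    fcntl.lockf(fd, fcntl.LOCK_EX)   # blocks until the lock is granted

def accept_mutex_off(fd):
    fcntl.lockf(fd, fcntl.LOCK_UN)   # releases it so a sibling child can accept

path = tempfile.mkstemp()[1]
fd = os.open(path, os.O_WRONLY)
accept_mutex_on(fd)   # what accept_mutex_on() in frame #3 was doing when SIGSEGV hit
accept_mutex_off(fd)
os.close(fd)
os.unlink(path)
```

The mutex itself is almost certainly not the culprit; a crash here usually means the child's memory was already corrupted by something loaded earlier (as the accepted answer later confirms).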
#0  0x00007fcf01cd1b82 in do_fcntl (fd=76, cmd=7, arg=0x7fcf02114e80     <proc_mutex_lock_it>)
      at ../sysdeps/unix/sysv/linux/fcntl.c:39
        resultvar = 18446744073709551104
#1  0x00007fcf01cd1c59 in __libc_fcntl (fd=<optimized out>, cmd=cmd@entry=7)
    at ../sysdeps/unix/sysv/linux/fcntl.c:88
        ap = {{gp_offset = 16, fp_offset = 1196201076, overflow_arg_area =     0x7ffc54089050,
            reg_save_area = 0x7ffc54089000}}
        arg = <optimized out>
        oldtype = <optimized out>
        result = <optimized out>
#2  0x00007fcf01efb326 in proc_mutex_fcntl_acquire (mutex=0x7fcf02890340)
    at /tmp/buildd/apr-1.5.1/locks/unix/proc_mutex.c:579
        rc = <optimized out>
#3  0x00007fcefb0c05fd in accept_mutex_on () at prefork.c:232
        rv = -512
#4  child_main (child_num_arg=76) at prefork.c:611
        current_conn = 0xfffffffffffffe00
        csd = 0x7fcf0288b0a0
        thd = 0x7fcf0288d0a0
        osthd = 140527079356288
        ptrans = 0x7fcf0288b028
        allocator = 0x7fcf03712580
        i = 1409847472
        pollset = 0x7fcf0288d488
        sbh = 0x7fcf0288d480
        lockfile = 0x7fcf02a73898 <ap_listeners> "@73
vcl 4.0;
import std;
import directors;

# Default backend definition. Set this to point to your content server.
backend server1  {
    .host = "127.0.0.1";
    .port = "8080";
    .probe = {
        .request =
          "HEAD / HTTP/1.1"
          "Host: www.mydomain.com"
          "Connection: close"
          "User-Agent: Varnish Health Probe";

          .interval  = 5s; # check the health of each backend every 5 seconds
          .timeout   = 1s; # timing out after 1 second.
          .window    = 5;  # If 3 out of the last 5 polls succeeded the backend is considered healthy, otherwise it will be marked as sick
          .threshold = 3;
    }
    .max_connections = 200;
    .first_byte_timeout     = 300s;   # How long to wait before we receive a first byte from our backend?
    .connect_timeout        = 5s;     # How long to wait for a backend connection?
    .between_bytes_timeout  = 2s;     # How long to wait between bytes received from our backend?
}

acl purge {
  # ACL we'll use later to allow purges
  "localhost";
  "127.0.0.1";
  "::1";
}

sub vcl_init {
  # Called when VCL is loaded, before any requests pass through it.
  # Typically used to initialize VMODs.

  new vdir = directors.round_robin();
  vdir.add_backend(server1);
}

sub vcl_recv {

    # Called at the beginning of a request, after the complete request has been received and parsed.
    # Its purpose is to decide whether or not to serve the request, how to do it, and, if applicable,
    # which backend to use.
    # also used to modify the request

    #19.8 Solution: Rewrite URL and Host Header Fields
    set req.http.x-host = req.http.host;
    set req.http.x-url = req.url;

    # Allow purging
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) { # purge is the ACL defined at the beginning
          # Not from an allowed IP? Then die with an error.
          return (synth(405, "This IP is not allowed to send PURGE requests."));
        }

        # If you got to this stage (and didn't error out above), purge the cached result
        return (purge);
    }

    if (req.method == "BAN") {
            # Same ACL check as above:
            if (!client.ip ~ purge) {
                    return(synth(403, "Not allowed."));
            }
            ban("req.http.host == " + req.http.host +
                  " && req.url == " + req.url);

            # Throw a synthetic page so the
            # request won't go to the backend.
            return(synth(200, "Ban added"));
    }

      # Only deal with "normal" types
      if (req.method != "GET" &&
          req.method != "HEAD" &&
          req.method != "PUT" &&
          req.method != "POST" &&
          req.method != "TRACE" &&
          req.method != "OPTIONS" &&
          req.method != "PATCH" &&
          req.method != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
      }


    # Only SESSIONID and PHPSESSID are left in req.http.cookie at this point.

    # Some generic URL manipulation, useful for all templates that follow
    # First remove the Google Analytics added parameters, useless for our backend
    if (req.url ~ "(\?|&)(utm_source|utm_medium|utm_campaign|utm_content|gclid|cx|ie|cof|siteurl)=") {
    set req.url = regsuball(req.url, "&(utm_source|utm_medium|utm_campaign|utm_content|gclid|cx|ie|cof|siteurl)=([A-z0-9_\-\.%25]+)", "");
    set req.url = regsuball(req.url, "\?(utm_source|utm_medium|utm_campaign|utm_content|gclid|cx|ie|cof|siteurl)=([A-z0-9_\-\.%25]+)", "?");
    set req.url = regsub(req.url, "\?&", "?");
    set req.url = regsub(req.url, "\?$", "");
    }

    # Large static files are delivered directly to the end-user without
    # waiting for Varnish to fully read the file first.
    # Varnish 4 fully supports Streaming, so set do_stream in vcl_backend_response()
    if (req.url ~ "^[^?]*\.(7z|html|css|js|avi|bz2|flac|flv|gz|mka|mkv|mov|mp3|mp4|mpeg|mpg|ogg|ogm|opus|rar|tar|tgz|tbz|txz|wav|webm|xz|zip)(\?.*)?$") {
      unset req.http.Cookie;
      return (hash);
    }

      # Send Surrogate-Capability headers to announce ESI support to backend
      set req.http.Surrogate-Capability = "key=ESI/1.0";

      if (req.http.Authorization) {
        # Not cacheable by default
        return (pass);
      }

    ################## ################## ################## ###########
    ################## PASS BACKEND LOGINS #############################
    ################## ################## ################## ###########

    if (
        req.url ~ "^/phpmyadmin" ||
        req.url ~ "^/admin/" ||
        req.url ~ "/wp-(login|admin)" ||
        req.url ~ "^/typo3" ||
        req.method == "POST"
         ) {
      return(pass);
    }

    return(hash);

}


sub vcl_pipe {
  # Called upon entering pipe mode.
  # In this mode, the request is passed on to the backend, and any further data from both the client
  # and backend is passed on unaltered until either end closes the connection. Basically, Varnish will
  # degrade into a simple TCP proxy, shuffling bytes back and forth. For a connection in pipe mode,
  # no other VCL subroutine will ever get called after vcl_pipe.

  # Note that only the first request to the backend will have
  # X-Forwarded-For set.  If you use X-Forwarded-For and want to
  # have it set for all requests, make sure to have:
  # set bereq.http.connection = "close";
  # here.  It is not set by default as it might break some broken web
  # applications, like IIS with NTLM authentication.

  # Implementing websocket support
  if (req.http.upgrade) {
    set bereq.http.upgrade = req.http.upgrade;
  }

  return (pipe);
}

# The data on which the hashing will take place
sub vcl_hash {
  # Called after vcl_recv to create a hash value for the request. This is used as a key
  # to look up the object in Varnish.

  hash_data(req.url);

  if (req.http.host) {
    hash_data(req.http.host);
  } else {
    hash_data(server.ip);
  }

  # hash cookies for requests that have them
  if (req.http.Cookie) {
    hash_data(req.http.Cookie);
  }
}


sub vcl_hit {
  # Called when a cache lookup is successful.

  if (obj.ttl >= 0s) {
    # A pure unadultered hit, deliver it
    return (deliver);
  }

  # When several clients request the same page, Varnish sends one request to the
  # backend and puts the others on hold while fetching a single copy. In some
  # products this is called request coalescing; Varnish does it automatically.
  # If you are serving thousands of hits per second, the queue of waiting
  # requests can get huge. There are two potential problems: one is a thundering
  # herd (suddenly releasing a thousand threads to serve content might send the
  # load sky high); secondly, nobody likes to wait. To deal with this we can
  # instruct Varnish to keep objects in cache beyond their TTL and serve the
  # waiting requests somewhat stale content.

  # We have no fresh fish. Lets look at the stale ones.
  if (std.healthy(req.backend_hint)) {
    # Backend is healthy. Limit age to 10s.
    if (obj.ttl + 10s > 0s) {
      #set req.http.grace = "normal(limited)";
      return (deliver);
    } else {
      # No candidate for grace. Fetch a fresh object.
      return(deliver);
    }
  } else {
    # backend is sick - use full grace
      if (obj.ttl + obj.grace > 0s) {
      #set req.http.grace = "full";
      return (deliver);
    } else {
      # no graced object.
      return (deliver);
    }
  }

  # fetch & deliver once we get the result
  return (deliver); # Dead code, keep as a safeguard
}

sub vcl_miss {
  # Called after a cache lookup if the requested document was not found in the cache. Its purpose
  # is to decide whether or not to attempt to retrieve the document from the backend, and which
  # backend to use.

  return (fetch);
}


# Handle the HTTP request coming from our backend
sub vcl_backend_response {
  # Called after the response headers have been successfully retrieved from the backend.

  # Pause ESI request and remove Surrogate-Control header
  if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
    unset beresp.http.Surrogate-Control;
    set beresp.do_esi = true;
  }

  #  Set TTL to whatever x-max-age tells us or 120s
  set beresp.ttl = std.duration(beresp.http.x-max-age + "s", 120s);

  #An HTTP 1.0 server might send the header Pragma: nocache. Varnish ignores this header. You could easily add support for this header in VCL.

  if (beresp.http.Pragma ~ "nocache") {
    set beresp.uncacheable = true;
    set beresp.ttl = 120s; # how long not to cache this url.
  }

  # Enable cache for all static files
  # The same argument as the static caches from above: monitor your cache size, if you get data nuked out of it, consider giving up the static file cache.
  # Before you blindly enable this, have a read here: https://ma.ttias.be/stop-caching-static-files/
  if (bereq.url ~ "^[^?]*\.(7z|avi|bmp|bz2|css|csv|doc|docx|eot|flac|flv|gif|gz|ico|jpeg|jpg|js|less|mka|mkv|mov|mp3|mp4|mpeg|mpg|odt|otf|ogg|ogm|opus|pdf|png|ppt|pptx|rar|rtf|svg|svgz|swf|tar|tbz|tgz|ttf|txt|txz|wav|webm|webp|woff|woff2|xls|xlsx|xml|xz|zip)(\?.*)?$") {
    unset beresp.http.set-cookie;
  }

  # Large static files are delivered directly to the end-user without
  # waiting for Varnish to fully read the file first.
  # Varnish 4 fully supports Streaming, so use streaming here to avoid locking.
  if (bereq.url ~ "^[^?]*\.(7z|avi|bz2|flac|flv|gz|mka|mkv|mov|mp3|mp4|mpeg|mpg|ogg|ogm|opus|rar|tar|tgz|tbz|txz|wav|webm|xz|zip)(\?.*)?$") {
    unset beresp.http.set-cookie;
    set beresp.do_stream = true;  # Check memory usage it'll grow in fetch_chunksize blocks (128k by default) if the backend doesn't send a Content-Length header, so only enable it for big objects
  }

  # Sometimes, a 301 or 302 redirect formed via Apache's mod_rewrite can mess with the HTTP port that is being passed along.
  # This often happens with simple rewrite rules in a scenario where Varnish runs on :80 and Apache on :8080 on the same box.
  # A redirect can then often redirect the end-user to a URL on :8080, where it should be :80.
  # This may need finetuning on your setup.
  #
  # To prevent accidental replace, we only filter the 301/302 redirects for now.
  if (beresp.status == 301 || beresp.status == 302) {
    set beresp.http.Location = regsub(beresp.http.Location, ":[0-9]+", "");
  }

  # Set 2min cache if unset for static files
  if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
    set beresp.ttl = 120s; # Important, you shouldn't rely on this, SET YOUR HEADERS in the backend
    set beresp.uncacheable = true;
    return (deliver);
  }

  # Don't cache 50x responses
  if (beresp.status == 500 || beresp.status == 502 || beresp.status == 503 || beresp.status == 504) {
    return (abandon);
  }

  # Allow stale content, in case the backend goes down.
  # make Varnish keep all objects for 6 hours beyond their TTL
  set beresp.grace = 6h;

  return (deliver);
}


# The routine when we deliver the HTTP request to the user
# Last chance to modify headers that are sent to the client
sub vcl_deliver {
  # Called before a cached object is delivered to the client.

  #19.11 Solution: Modify the HTTP response header fields
  set resp.http.X-Age = resp.http.Age;
  unset resp.http.Age;

  if (obj.hits > 0) { # Add debug header to see if it's a HIT/MISS and the number of hits, disable when not needed
    set resp.http.X-Cache = "HIT";
  } else {
    set resp.http.X-Cache = "MISS";
  }

  # Please note that obj.hits behaviour changed in 4.0, now it counts per objecthead, not per object
  # and obj.hits may not be reset in some cases where bans are in use. See bug 1492 for details.
  # So take hits with a grain of salt
  set resp.http.X-Cache-Hits = obj.hits;

  # Remove some headers: PHP version
  unset resp.http.X-Powered-By;

  # Remove some headers: Apache version & OS
  unset resp.http.Server;
  unset resp.http.X-Drupal-Cache;
  #unset resp.http.X-Varnish;
  unset resp.http.Via;
  unset resp.http.Link;
  unset resp.http.X-Generator;

  return (deliver);
}

sub vcl_purge {
  # Only handle actual PURGE HTTP methods, everything else is discarded
  if (req.method != "PURGE") {
    # restart request
    set req.http.X-Purge = "Yes";
    return(restart);
  }
}

sub vcl_synth {
## handle redirecting from http to https
  if (resp.status == 750) {
    set resp.status = 301;
    set resp.http.Location = req.http.x-redir;
    return(deliver);
  }

  if (resp.status == 720) {
    # We use this special error status 720 to force redirects with 301 (permanent) redirects
    # To use this, call the following from anywhere in vcl_recv: return (synth(720, "http://host/new.html"));
    set resp.http.Location = resp.reason;
    set resp.status = 301;
    return (deliver);
  } elseif (resp.status == 721) {
    # And we use error status 721 to force redirects with a 302 (temporary) redirect
    # To use this, call the following from anywhere in vcl_recv: return (synth(721, "http://host/new.html"));
    set resp.http.Location = resp.reason;
    set resp.status = 302;
    return (deliver);
  }

    if (resp.status == 401) {
        set resp.http.WWW-Authenticate = "Basic";
    }

  return (deliver);
}


sub vcl_fini {
  # Called when VCL is discarded only after all requests have exited the VCL.
  # Typically used to clean up VMODs.

  return (ok);
}

/* Customize error responses */
sub vcl_backend_error {
    if (beresp.status == 503){
        set beresp.status = 200;
        synthetic( {"
        <html><body>I will be there for you again, soon. I promise</body></html>
        "} );
        return (deliver);
    }
}

I have no idea why this error appears or how I can reproduce it. But it reliably happens after a few hours. All of a sudden. Thanks for any help!

The default.vcl is the one quoted in full above.
by Steve, 28.11.2016 / 13:47

1 answer


Solved.

The solution goes like this:

Debugging

First I fixed all the MySQL problems using the mysqltuner.pl script. Then I added the following options to /etc/mysql/mysql.conf.d/mysqld.cnf:

innodb_buffer_pool_size = 1G
max_heap_table_size=128M
tmp_table_size=128M
key_buffer_size=56M
innodb_log_file_size=256M
innodb_buffer_pool_instances=1
join_buffer_size = 4M
table_open_cache = 820199
max_connections = 100
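As a sanity check on those values, here is a rough mysqltuner-style worst-case estimate. The per-connection buffers below are not in the snippet; they are assumed MySQL 5.7 defaults, so treat the result as a sketch, not a measurement:

```python
# Rough peak-memory estimate for the MySQL settings above (mysqltuner-style).
GiB, MiB, KiB = 1024**3, 1024**2, 1024

# Values from the config snippet:
innodb_buffer_pool_size = 1 * GiB
key_buffer_size         = 56 * MiB
join_buffer_size        = 4 * MiB
max_connections         = 100

# NOT in the snippet -- assumed MySQL 5.7 defaults:
sort_buffer_size     = 256 * KiB
read_buffer_size     = 128 * KiB
read_rnd_buffer_size = 256 * KiB

per_connection = (join_buffer_size + sort_buffer_size +
                  read_buffer_size + read_rnd_buffer_size)
peak = innodb_buffer_pool_size + key_buffer_size + max_connections * per_connection
print(f"worst-case MySQL footprint: {peak / GiB:.2f} GiB")  # ~1.51 GiB on these numbers
```

Around 1.5 GB for MySQL leaves headroom on an 8 GB box, which is why the crashes continued: the leak was elsewhere.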

That fixed some memory leaks, but it did not fix the real problem. So I had to debug Apache's child processes/PIDs. First I reduced my apache2 instances in /etc/apache2/apache2.conf:

<IfModule mpm_prefork_module>
    StartServers         1
    MinSpareServers      1
    MaxSpareServers      1
    ServerLimit          50
    MaxClients           12
    MaxRequestsPerChild  1000
</IfModule>
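To see why capping the number of children helps here, a back-of-the-envelope calculation (the average child size is an assumption for illustration, not a value from the post):

```python
# Worst-case memory envelope for Apache prefork + mod_php on this box.
ram_mib = 8 * 1024        # 8 GB RAM, from the post
memory_limit_mib = 1024   # PHP memory_limit per request, from the post
typical_child_mib = 80    # ASSUMED average RSS of a mod_php prefork child

def envelope_mib(children, per_child_mib):
    """Resident memory if every child grows to per_child_mib."""
    return children * per_child_mib

# Even 12 children all hitting the PHP memory_limit would exceed 8 GB...
assert envelope_mib(12, memory_limit_mib) > ram_mib
# ...but at a typical child size they fit comfortably.
assert envelope_mib(12, typical_child_mib) < ram_mib
```

With unrestricted workers (stock prefork configs often allow 150 children), the worst case is far beyond physical RAM, which matches the allocation failures seen later.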

Then restart apache2:

 /etc/init.d/apache2 stop && /etc/init.d/apache2 start

To find an Apache PID (owned by the www-data user), use

atop -m

Take the first PID owned by www-data and note it down. Then type

gdb

Once the debugger is running:

attach [your PID number]

To get a kind of continuous error log, enter

c

The output should look like this:

Loaded symbols for /lib/x86_64-linux-gnu/libnss_dns.so.2
0x00007fa48ef1eb82 in do_fcntl (fd=60, cmd=7, arg=0x7fa48f361e80 <proc_mutex_lock_it>)
at ../sysdeps/unix/sysv/linux/fcntl.c:39
39      ../sysdeps/unix/sysv/linux/fcntl.c: No such file or directory.
(gdb) c
Continuing.

Reload your site; the next time the error occurs, gdb will show it.
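To correlate what gdb catches with what Apache logged, a hypothetical helper that counts the AH00052 child-segfault lines in an error log (the log path in the comment is an example, not from the post):

```python
import re

# Matches the Apache log fragment quoted in the question:
# "AH00052: child pid NNNN exit signal Segmentation fault (11)"
SEGFAULT = re.compile(r"exit signal Segmentation fault \(11\)")

def count_segfaults(log_path):
    """Count child-segfault lines in an Apache error log."""
    with open(log_path) as f:
        return sum(1 for line in f if SEGFAULT.search(line))

# e.g. count_segfaults("/var/log/apache2/error.log")
```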

Solving

In my case it was /usr/lib/php/20151012/opcache.so. I love this module, but it broke my Apache because it could not allocate memory. So I went to

 vi /etc/php/7.0/apache2/php.ini

and disabled opcache:

opcache.enable=0

This solution is a bit sad. But since I use Varnish as the caching server, the performance hit does not matter much.

    
answered 01.12.2016 / 09:22