The Log4j JsonLayout puts the log entry timestamp in a field named "instant" by
default, but the Stackdriver Logging agent does not understand that field. The
logging agent instead uses the time that it received the log entry, which is
less accurate and has only second-level precision.
This commit adds a key-value pair to the JsonLayout pattern that can be
understood by the logging agent. It uses a "time" key as described in
https://cloud.google.com/logging/docs/agent/configuration#timestamp-processing
and formats the timestamp as described in the Protocol Buffer JSON mapping,
https://developers.google.com/protocol-buffers/docs/proto3#json.
Allowing the Stackdriver Logging agent to read the more accurate timestamps
inserted by Log4j is especially important in the adservice, because its logs are
correlated with traces and we need to see where each message falls on the
timeline of the trace.
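A minimal sketch of the kind of entry this adds to the JsonLayout configuration (the exact key-value pair and date pattern in `log4j2.xml` are assumptions based on the description above, not copied from the diff):
```xml
<JsonLayout compact="true" eventEol="true">
  <!-- Emit an RFC 3339 / proto3-JSON style timestamp under the "time" key, which the
       Stackdriver Logging agent parses instead of falling back to its receive time. -->
  <KeyValuePair key="time" value="$${date:yyyy-MM-dd'T'HH:mm:ss.SSSZ}"/>
</JsonLayout>
```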
This PR does a few things:
1. **Removes unnecessary Python dependencies currently being installed for `emailservice`**
There are quite a few packages being installed that aren't actual dependencies.
2. **Removes a number of related (and equally unnecessary) system-level dependencies for `emailservice`**
These were only pulled in to support the unnecessary Python dependencies above.
3. **Pins all of the sub-dependencies for `loadgenerator`**
This is good practice: it ensures things don't break at some point in the future when a newer version of an unpinned sub-dependency is released.
4. **Compiles all Python dependencies from `requirements.in` files**
This is mostly bookkeeping. It lets us specify only the top-level dependencies we care about in the `requirements.in` files, which are then compiled into frozen dependencies in the `requirements.txt` files. This ensures we only install the dependencies we need and aren't missing any unpinned sub-dependencies, and it makes it clearer where our sub-dependencies come from (see the compile sketch after the size comparison below).
5. **Switches from `-alpine` to `-slim` images**
Python's built distribution format (wheel) is incompatible with Alpine-based images (which use musl libc), causing dependencies like `grpcio` to be compiled from scratch rather than installed from a pre-built wheel.
This should improve or possibly fix #58, while keeping the image size roughly the same:
```
emailservice latest d1b818eabe05 6 seconds ago 286MB
loadgenerator latest 4d9b5acbfbbb 6 seconds ago 125MB
```
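For reference, a sketch of the compile step this workflow implies, assuming pip-tools' `pip-compile` is what turns each `requirements.in` into a fully pinned `requirements.txt` (the tool choice is an assumption; the PR text only describes the in/txt split):
```sh
pip install pip-tools
# Regenerate the fully pinned requirements.txt from the top-level requirements.in
pip-compile requirements.in --output-file requirements.txt
```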
This is the first service that exports traces to Jaeger. Others to follow.
Requires Jaeger to be instantiated first:
- `helm install --name jaeger stable/jaeger-operator`
- `kubectl apply -f jaeger.yaml`
Contents of `jaeger.yaml`:
```yaml
apiVersion: io.jaegertracing/v1alpha1
kind: Jaeger
metadata:
  name: jaeger
```
The steps above will be added to the README in a subsequent PR.
Enables tracing in the email and recommendation services, which was disabled in 316db88 because of a memory leak in the Stackdriver exporter.
We fixed the leak in https://github.com/googleapis/google-cloud-python/pull/6856. The fix is included in the [0.1.10 release of opencensus-python](https://github.com/census-instrumentation/opencensus-python/releases/tag/v0.1.10).
With this diff, traces show up as expected in Stackdriver while running the demo on GKE. Using an `opencensus-python` package version before `0.1.10` causes the email and recommendation services to leak memory until they OOM. Memory use is back to normal (i.e. roughly constant) with the new package version.
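Concretely, re-enabling tracing implies bumping the pin in the affected services' Python requirements to at least 0.1.10 (the exact file paths are an assumption, not shown in this PR):
```
# emailservice/requirements.txt and recommendationservice/requirements.txt (assumed paths)
opencensus==0.1.10
```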
This removes the hardcoded GCP project name from images and requires passing an explicit repository flag to skaffold. It also updates the cloudbuild.yaml for staging to use the gcr.io/k8s-skaffold/skaffold image.
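For example, something like the following, assuming skaffold's `--default-repo` flag is the repository flag in question (the flag name is not stated in this PR):
```sh
# Push and deploy images to your own registry instead of a hardcoded GCP project
skaffold run --default-repo=gcr.io/<your-project-id>
```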
Fixes #17.
* adservice: Reduced the Docker image size to ~165MB
(down from ~886MB) by switching to Alpine and
using multi-stage builds
* adservice: Changed the glibc install in the builder stage so that it does not require untrusted packages
* adservice: Refactored the Dockerfile into a multi-stage build. The 'build' stage runs from openjdk:8-slim, but the final image is Alpine-based. We can get away with this since Java runs in a VM and the architecture of the images doesn't change between build stages (see the sketch below).
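A rough sketch of the multi-stage layout described above (stage names, the Gradle task, copy paths, and the entrypoint are assumptions, not the exact Dockerfile from this PR; it also sidesteps the glibc install by using Alpine's own `openjdk8-jre` package):
```dockerfile
# Build stage: full JDK image, used only to compile and assemble the service
FROM openjdk:8-slim AS builder
WORKDIR /app
COPY . .
RUN ./gradlew installDist

# Final stage: small Alpine base with just a JRE and the assembled app
FROM alpine:3.8
RUN apk add --no-cache openjdk8-jre
WORKDIR /app
COPY --from=builder /app/build/install/hipstershop /app
ENTRYPOINT ["/app/bin/AdService"]
```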
Change the log format in the Python and Node.js services.
Affected services are currencyservice, emailservice, paymentservice,
and recommendationservice. The loadgenerator is left as is because of
the difficulty of changing the log format and log target in Locust.
Ref. #47
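As a rough illustration of the Python side of this change (not the exact formatter used in the services; the field names are assumptions), a structured log line can be written to stdout like so:
```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line so the logging agent can parse each entry."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "severity": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("listening on port: 8080")
```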
The ad service now returns ads matching the categories of the product that is
currently displayed. Changes in this commit:
- List all products' categories in products.json.
- Pass the current product's categories from the frontend to the ad service when
looking up ads.
- Store a statically initialized multimap from product category to ad in the ad
service (sketched below).
- Return all ads matching the given categories when handling an ads request.
The ad service continues to return random ads when no categories are given or
no ads match the categories.
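A minimal sketch of the category-to-ad multimap and lookup described above, assuming Guava's `ImmutableListMultimap` (the category names and string "ads" are placeholders for the service's real `Ad` messages):
```java
import com.google.common.collect.ImmutableListMultimap;
import java.util.ArrayList;
import java.util.List;

public class AdIndex {
  // Statically initialized multimap from product category to ads.
  private static final ImmutableListMultimap<String, String> ADS_BY_CATEGORY =
      ImmutableListMultimap.<String, String>builder()
          .putAll("clothing", "tank top ad")
          .putAll("accessories", "watch ad", "sunglasses ad")
          .build();

  // Collect every ad matching the given categories; the caller falls back to
  // random ads when the request has no categories or nothing matches.
  public static List<String> adsForCategories(List<String> categories) {
    List<String> ads = new ArrayList<>();
    for (String category : categories) {
      ads.addAll(ADS_BY_CATEGORY.get(category));
    }
    return ads;
  }
}
```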
This field can be used as a context key to look up relevant ads in the ad
service.
/cc @rghetia
I also ran the genproto.sh scripts for the Java and Go services and included those changes in the second commit. I encountered an issue when I ran genproto.sh for the recommendation service, and I'm still looking into it.
Upgrading grpc-java fixed an error that I encountered when I tried modifying the adservice to write logs to Stackdriver with google-cloud-logging ("`com.google.cloud.logging.LoggingException: io.grpc.StatusRuntimeException: UNAUTHENTICATED: Credentials require channel with PRIVACY_AND_INTEGRITY security level. Observed security level: NONE`").
Reduce loadgenerator's image size from ~972MB to ~117MB
* Changed loadgen.sh to execute with `/bin/sh` as opposed to `/bin/bash`
* Changed the Dockerfile to a multi-stage build
* Changed base image to `python:3-alpine` as opposed to `python:3.6`
Reduce the Docker image for emailservice to ~240MB (down from ~1.31GB)
The main application (`email_server.py`) now runs on Python 2.7. Before, we had both Python 2.7 and Python 3 installed in the image.
Switched to `python:2.7-alpine3.8` as the base image and used a multi-stage Dockerfile to keep dependencies minimal.
Fixes #49
From my shell:
```
$ docker build -t emailservice:dev . && docker run -it emailservice:dev
Sending build context to Docker daemon 97.28kB
Step 1/17 : FROM python:2.7-alpine3.8 as base
---> b2bc7255b42c
Step 2/17 : FROM base as builder
---> b2bc7255b42c
Step 3/17 : RUN apk add --update --no-cache gcc linux-headers make musl-dev python-dev g++ cairo-dev cairo openssl-dev gobject-introspection-dev
---> Using cache
---> 6daf3d9fe49a
Step 4/17 : ENV GRPC_PYTHON_VERSION 1.15.0
---> Using cache
---> 3e33d97d9580
Step 5/17 : RUN python -m pip install --upgrade pip
---> Using cache
---> e8fa3879c282
Step 6/17 : RUN pip install grpcio==${GRPC_PYTHON_VERSION} grpcio-tools==${GRPC_PYTHON_VERSION}
---> Using cache
---> c6fba7743eed
Step 7/17 : COPY requirements.txt .
---> Using cache
---> 1f6b0a444980
Step 8/17 : RUN pip install -r requirements.txt
---> Using cache
---> 8cc0a7af6aa8
Step 9/17 : FROM base as final
---> b2bc7255b42c
Step 10/17 : RUN GRPC_HEALTH_PROBE_VERSION=v0.1.0-alpha.1 && wget -qO/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-amd64 && chmod +x /bin/grpc_health_probe
---> Using cache
---> e954a0384081
Step 11/17 : ENV PYTHONUNBUFFERED=0
---> Using cache
---> 64ece3d72a66
Step 12/17 : WORKDIR /email_server
---> Using cache
---> 27b34dc14492
Step 13/17 : COPY --from=builder /usr/local/lib/python2.7/ /usr/local/lib/python2.7/
---> Using cache
---> 60035ec8dfd4
Step 14/17 : RUN apk add --no-cache libstdc++
---> Using cache
---> 920be90c126e
Step 15/17 : COPY . .
---> Using cache
---> 9541bed2d7a0
Step 16/17 : EXPOSE 8080
---> Using cache
---> 48fbeaa852b9
Step 17/17 : ENTRYPOINT [ "python", "email_server.py" ]
---> Using cache
---> ff317770992d
Successfully built ff317770992d
Successfully tagged emailservice:dev
starting the email service in dummy mode.
listening on port: 8080
```