
WIP - gRPC Plugin framework #1214

Closed

Conversation

@olivierboucher

Which problem is this PR solving?

Short description of the changes

  • Plugin-based storage backend

@yurishkuro
Member

Great start. My main comment is about avoiding duplicating the trace model.

"github.com/jaegertracing/jaeger/pkg/grpc/config"
)

const pluginBinary = "grpc-plugin.binary"
Member

This should refer to storage since we may want to use grpc plugins for other needs.

Author

What would you propose? I'm afraid of coming up with something that's too long.

.editorconfig Outdated
@@ -0,0 +1,4 @@
# Override for Makefile
Member

Please don't include this. You can add it to your global gitignore.

plugin.Serve(&plugin.ServeConfig{
	HandshakeConfig: shared.Handshake,
	VersionedPlugins: map[int]plugin.PluginSet{
		18: map[string]plugin.Plugin{
Member

What is 18?

Author

Meant for 1.8 (current version). Should we use something else?

Member

I don't fully understand the semantics here. Is this the version of the plugin API?

Contributor

sourcegraph helped me find some docs here:

"VersionedPlugins is a map of PluginSets for specific protocol versions. These can be used to negotiate a compatible version between client and server. If this is set, Handshake.ProtocolVersion is not required."

Author

It is the version of the plugin API. I changed it to 1.

glide.yaml Outdated
@@ -76,3 +76,4 @@ import:
- package: golang.org/x/sys
subpackages:
- unix
- package: github.com/hashicorp/go-plugin
Member

Is there a major version?

Author

No, there is none.

VersionedPlugins: map[int]plugin.PluginSet{
	18: shared.PluginMap,
},
Cmd: exec.Command("sh", "-c", configuration.PluginBinary),
Member

Do we need sh?

Author

We do not, good point.

@@ -0,0 +1,90 @@
syntax = "proto3";
Member

Is this copied just to remove gogoproto options?

Author

Yes, it was.

"github.com/jaegertracing/jaeger/plugin/storage/grpc/proto"
)

func DependencyLinkSliceFromProto(p []*proto.DependencyLink) []model.DependencyLink {
Member
@yurishkuro, Nov 27, 2018

I would like to avoid the need for this file. One of the main reasons for generating the domain model from proto IDL was to be able to reuse those types in the gRPC plugin interfaces.

@yurishkuro yurishkuro changed the title WIP - Plugin framework WIP - gRPC Plugin framework Nov 27, 2018
@chvck
Contributor

chvck commented Dec 4, 2018

Hi @olivierboucher, we're keen to use the work that you have started here, thanks! Do you need any assistance in pushing it to a point where it can be considered complete?

@olivierboucher
Author

@chvck I have been a bit busy lately, but I still plan on finishing this. I might need some help with the proto generation in order to use gogoproto and remove the mapping code.

@olivierboucher
Author

@yurishkuro

I am trying to add support for configuration and I may need advice.

I am doing the following to read from the viper context:

const pluginArgsPrefix = "grpc-storage-plugin.arg."

func (opt *Options) InitFromViper(v *viper.Viper) {
	opt.Configuration.PluginBinary = v.GetString(pluginBinary)

	for _, f := range v.AllKeys() {
		if strings.HasPrefix(f, pluginArgsPrefix) {
			opt.Configuration.PluginArgs = append(opt.Configuration.PluginArgs, v.GetString(f))
		}
	}
}

However, I am confused as to what to add in the AddFlags method, since the flags are dynamic. Should I do nothing? Not sure how to document this.

My goal is for people to use the args like this:

grpc-storage-plugin.arg.plugin-specific-arg=1

and have plugin-specific-arg=1 passed to their plugin.

Not sure if this conversation should be moved to an issue either.

Thanks

@yurishkuro
Member

@olivierboucher making CLI flags dynamic is pretty complicated. The way we did this for the main SPAN_STORAGE_TYPE is that even though the flag is accepted on the command line, it is actually parsed manually (not via the library), then used to instantiate the specific storage factory, which is in turn responsible for registering the actual flags. Only after that is the command line parsed by the flags library.

I think the same process will be pretty complicated to reproduce for grpc-plugin. There is a way, but I think it's way more complicated than it is worth. Do you have any concerns with only doing a config-file approach? Then the only CLI flag we need to define is the path to the config file. The parsing of the config file can be still done via Viper inside the plugin main(), this way some config params can still be overwritten with environment variables.

@olivierboucher
Author

@yurishkuro

Makes sense! I just committed the implementation.

I think it's time to benchmark and write tests against the new code. I'm not familiar with code benchmarking; any method you would suggest? Is there existing code somewhere?

Thanks

@isaachier
Contributor

This is where #1221 might come in. If you can figure out crossdock, try adding an integration test. It is definitely tricky, so I'll see if I have some free time to help. Benchmarking can be as simple as measuring request latency and perhaps throughput. Not sure how formal the process needs to be.

@yurishkuro
Member

Well, crossdock is more for correctness integration testing, we never used it for performance.

For benchmarking storage I don't think the integration tests are important. We just need a data generator pushing data into SpanWriter interface, one running in-process, another running via grpc-plugin.
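A minimal version of that comparison can be built on the standard testing package used programmatically: a generator pushes spans through a SpanWriter-style interface, and each backend is benchmarked the same way. Span, SpanWriter, and memoryWriter below are simplified stand-ins for Jaeger's model.Span, spanstore.Writer, and the in-memory storage, not the project's real types:

```go
package main

import (
	"fmt"
	"testing"
)

// Span and SpanWriter are simplified stand-ins for Jaeger's
// model.Span and spanstore.Writer.
type Span struct {
	TraceID       uint64
	OperationName string
}

type SpanWriter interface {
	WriteSpan(span *Span) error
}

// memoryWriter mimics the in-process memory storage.
type memoryWriter struct{ spans []*Span }

func (w *memoryWriter) WriteSpan(s *Span) error {
	w.spans = append(w.spans, s)
	return nil
}

// benchWriter measures WriteSpan throughput for any backend; a second
// implementation wrapping the gRPC plugin client would be benchmarked
// the same way and the ns/op figures compared.
func benchWriter(w SpanWriter) testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			w.WriteSpan(&Span{TraceID: uint64(i), OperationName: "op"})
		}
	})
}

func main() {
	res := benchWriter(&memoryWriter{})
	fmt.Printf("in-memory: %d iterations, %d ns/op\n", res.N, res.NsPerOp())
}
```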

@yurishkuro
Member

We have a trace generator in the internal repo that I used to stress test the agents back in the day. Let me see if I can move it to OSS.

@olivierboucher
Author

I know it's not top-quality data, but I re-used the createTrace method from the integration tests and called it in a loop. I compared memory vs the grpc-memory plugin and the numbers are about the same.

Keep me posted regarding that trace generator.

@yurishkuro
Member

Wow, same numbers? That makes me suspicious of the methodology. Did you really stress the system? The dual serialization of the gRPC version alone should show some difference.

@isaachier
Contributor

Did you use the standard Go benchmarking from the testing package?

@yurishkuro
Member

The trace generator also creates fairly simple traces; its purpose is to generate load on agents or collectors.

#1245

@olivierboucher
Author

I did not; what I did is probably meaningless. I will check out tracegen and run proper tests.

@olivierboucher
Author

I ran the tracegen tool and those are the results:

2018-12-12T15:25:45.913-0500    INFO    tracegen/main.go:55     Initialized global tracer
2018-12-12T15:26:15.917-0500    INFO    tracegen/worker.go:93   Worker 3 generated 519026 traces        {"worker": 3}
2018-12-12T15:26:15.917-0500    INFO    tracegen/worker.go:93   Worker 0 generated 516428 traces        {"worker": 0}
2018-12-12T15:26:15.917-0500    INFO    tracegen/worker.go:93   Worker 1 generated 515793 traces        {"worker": 1}
2018-12-12T15:26:15.917-0500    INFO    tracegen/worker.go:93   Worker 2 generated 517417 traces        {"worker": 2}
2018-12-12T15:26:15.917-0500    INFO    tracegen/main.go:59     Waiting 1.5sec for metrics to flush
olivierboucher in ~/go/src/github.com/jaegertracing/jaeger on plugin-framework *> (production:default)
$ tracegen --duration 30s --workers 4
2018-12-12T15:27:18.570-0500    INFO    tracegen/main.go:55     Initialized global tracer
2018-12-12T15:27:48.572-0500    INFO    tracegen/worker.go:93   Worker 1 generated 729523 traces        {"worker": 1}
2018-12-12T15:27:48.572-0500    INFO    tracegen/worker.go:93   Worker 2 generated 732114 traces        {"worker": 2}
2018-12-12T15:27:48.572-0500    INFO    tracegen/worker.go:93   Worker 0 generated 732004 traces        {"worker": 0}
2018-12-12T15:27:48.572-0500    INFO    tracegen/worker.go:93   Worker 3 generated 730584 traces        {"worker": 3}
2018-12-12T15:27:48.572-0500    INFO    tracegen/main.go:59     Waiting 1.5sec for metrics to flush

First is using the gRPC plugin, second is using memory. It looks like it has a huge performance impact, but I think that was expected. Is this tolerable?

@yurishkuro
Member

Could you elaborate on how you ran this? Did you just run the binary twice against different backends? tracegen is capable of saturating the UDP stream, so the numbers it outputs are not meaningful for this benchmark; what we actually need to look at is the write latency in the collectors and the number of spans saved (both should be available via Prometheus metrics).

Another type of test we could do is to try to saturate the storage, i.e. vary the -duration param of tracegen and run it for longer periods, looking for the point where the collector starts dropping spans because the storage does not keep up. This test is harder to run, though.

@olivierboucher
Author

@yurishkuro

You're correct, I did run tracegen against the two backends.

Can you help me by telling me which metrics are meaningful?

I still have all the prometheus metrics stored locally.

@yurishkuro
Member

yurishkuro commented Dec 12, 2018

These are the metrics that should be coming out of any storage

type WriteMetrics struct {
	Attempts   metrics.Counter `metric:"attempts"`
	Inserts    metrics.Counter `metric:"inserts"`
	Errors     metrics.Counter `metric:"errors"`
	LatencyOk  metrics.Timer   `metric:"latency-ok"`
	LatencyErr metrics.Timer   `metric:"latency-err"`
}

But, for some reason, I cannot find them from all-in-one.

Try looking for something like these:

jaeger_collector_save_latency_bucket{host="hostname",le="0.005"} 518383
jaeger_collector_save_latency_bucket{host="hostname",le="0.01"} 518424
jaeger_collector_save_latency_bucket{host="hostname",le="0.025"} 518536
jaeger_collector_save_latency_bucket{host="hostname",le="0.05"} 518563
jaeger_collector_save_latency_bucket{host="hostname",le="0.1"} 518563
jaeger_collector_save_latency_bucket{host="hostname",le="0.25"} 518563
jaeger_collector_save_latency_bucket{host="hostname",le="0.5"} 518563
jaeger_collector_save_latency_bucket{host="hostname",le="1"} 518563
jaeger_collector_save_latency_bucket{host="hostname",le="2.5"} 518563
jaeger_collector_save_latency_bucket{host="hostname",le="5"} 518563
jaeger_collector_save_latency_bucket{host="hostname",le="10"} 518563
jaeger_collector_save_latency_bucket{host="hostname",le="+Inf"} 518563
jaeger_collector_save_latency_sum{host="hostname"} 26.363240858001458
jaeger_collector_save_latency_count{host="hostname"} 518563
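The _sum and _count series of the histogram already give a quick sanity check: mean save latency is simply sum/count, which for the sample above works out to roughly 51µs. A trivial sketch of the arithmetic (meanLatency is illustrative, not a Jaeger helper):

```go
package main

import "fmt"

// meanLatency returns the mean from a Prometheus histogram's
// _sum (seconds) and _count series.
func meanLatency(sumSeconds, count float64) float64 {
	return sumSeconds / count
}

func main() {
	// Values from the jaeger_collector_save_latency sample above.
	mean := meanLatency(26.363240858001458, 518563)
	fmt.Printf("mean save latency: %.1fµs\n", mean*1e6)
}
```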

@olivierboucher
Author

olivierboucher commented Dec 13, 2018

I have limited testing capabilities on my MBP (16GB of RAM fills up quickly).

Here is a graph; my Prometheus instance's scrape rate is tuned to 2s for this test:

(graph omitted)

First I ran grpc, then memory. From my understanding there only seems to be a ~1ms difference, which leaves me perplexed.

EDIT: more graphs

(graphs omitted)

Looks like most of the spans were rejected.

@olivierboucher
Author

I think I understand what to do in order to make it work. I'll define trace IDs as bytes, just like it is done in model.proto, and add the gogo annotations. Not sure how this will translate to other languages, but let's focus on Go for now.

@isaachier
Contributor

isaachier commented Dec 14, 2018

I believe the Protobuf compiles for any language (despite gogo-specific annotations), as long as the gogo Protobuf files are available for inclusion.

See this C example I was playing with: https://github.com/isaachier/jaeger-client-c/tree/master/idl.

Added proper annotations to storage.proto instead of exposing TraceID in model.proto.
Tests are back to green.

Signed-off-by: Olivier Boucher <info@olivierboucher.com>
Moved DependencyLink to model.proto since it had to be used and the existing struct did not implement any Marshaler/Unmarshaler methods.

Changed storage configuration to allow testing via mocks.

Added DependencyReader interface to the StoragePlugin interface.

Signed-off-by: Olivier Boucher <info@olivierboucher.com>
@olivierboucher
Author

olivierboucher commented Dec 15, 2018

I fixed the issue by reverting the changes to TraceID, but I had to move DependencyLink to model.proto in order for the plugin interface to implement dependencystore.Reader. It does not break anything as far as tests go. Is there any problem with that?

EDIT: @isaachier I meant that other languages have to look at the Go implementation because they only see bytes for TraceID. It's just a hiccup; plugin writers will probably stick to Go anyway.

@yurishkuro
Member

> I fixed the issue by reverting changes to TraceID but I had to move DependencyLink to model.proto in order for the plugin interface to implement dependencystore.Reader. It does not break anything as far as test goes. Is there any problem with that?

That should be fine, although we may want to do that as a separate PR. Also, there seem to be some changes in the generated file; something is off, most likely a different version of the generator.

@olivierboucher
Author

> That should be fine, although we may want to do that as a separate PR. Also, there seem to be some changes in the generated file; something is off, most likely a different version of the generator.

What versions of protoc/protoc-gen-go should I run locally? Maybe this is off; I'm running 3.6.1.

So if we want to do this in two separate PRs, I would have to do the DependencyLink one first, because this one depends on it.

@yurishkuro
Member

Yes, we can do proto change as a separate (first) PR.

I don't think the version of protoc matters much; it's the version of gogo that's important. The Makefile has a proto-install target that installs the gogo generators from the /vendor directory.

@yurishkuro
Member

this seems to be a blocking issue #1258

@olivierboucher
Author

olivierboucher commented Dec 16, 2018

I fixed this issue by adding the prune settings in Gopkg.toml. I will make a PR for those changes.

EDIT: I may have spoken too fast, having overlooked the issue.

I added prune settings that brought back the missing proto files; it fixed `make proto`.

@chvck
Contributor

chvck commented Jan 4, 2019

I'm not sure if this is the right place for this, but it might be worth consideration. I've had a quick look at implementing a plugin using this and it seems nice, but I have an issue trying to create a plugin from outside of jaeger due to vendoring. In my own package, if I do

plugin.Serve(&plugin.ServeConfig{
	HandshakeConfig: plugin.HandshakeConfig{
		MagicCookieKey:   "STORAGE_PLUGIN",
		MagicCookieValue: "jaeger",
	},
	VersionedPlugins: map[int]plugin.PluginSet{
		1: map[string]plugin.Plugin{
			shared.StoragePluginIdentifier: &shared.StorageGRPCPlugin{
				Impl: &store,
			},
		},
	},
	GRPCServer: plugin.DefaultGRPCServer,
})

then I see

cannot use shared.StorageGRPCPlugin literal (type *shared.StorageGRPCPlugin) as type "github.com/hashicorp/go-plugin".Plugin in map value:
    *shared.StorageGRPCPlugin does not implement "github.com/hashicorp/go-plugin".Plugin (wrong type for Client method)
        have Client(*"github.com/jaegertracing/jaeger/vendor/github.com/hashicorp/go-plugin".MuxBroker, *rpc.Client) (interface {}, error)
        want Client(*"github.com/hashicorp/go-plugin".MuxBroker, *rpc.Client) (interface {}, error)

I could be doing this wrong/jumping the gun, but I thought it was worth bringing up just in case.

@yurishkuro
Member

@chvck this often happens if your code is pulling jaegertracing/jaeger from GOPATH instead of a flat vendor dir inside your repo.

Member

@yurishkuro left a comment

@olivierboucher I think in this PR you have a proof of concept that the approach works, so I would really like to see it move forward. I have a proposal: could you pull all proto-related changes into a separate PR and let's try to merge it first? It should include the changes to the Makefile & Gopkg. Once it's merged, it will be easier to iterate on the code. The reason I prefer it to be separate is because (a) it's the part that creates the most conflicts with master at the moment, and (b) see my comment about the Badger/#760 PR: once we have the IDL merged, we could proceed in that direction independently of the plugin work.

The only thing to do before splitting out the proto changes is to decide on my comment about splitting the Storage proto service into underlying reader/writer/etc. services, similar to how the Jaeger code is structured. If we don't do that split, then we're creating a monolithic storage, which may not always be practical, like those methods about sampling storage that you commented out. So it may be worth trying the split first in this PR, and if it works, then fork a new PR just for the proto changes.

-I vendor/github.com/grpc-ecosystem/grpc-gateway \
-I vendor/github.com/gogo/googleapis \
-I vendor/github.com/gogo/protobuf/protobuf \
-I vendor/github.com/gogo/protobuf \
Member

can we use $(PROTO_INCLUDES)?


.PHONY: plugin-proto
plugin-proto:
protoc \
Member

since this is not an independent model, let's add the compile step to the main proto target, e.g. before model/proto/model_test.proto

@@ -0,0 +1,201 @@
syntax = "proto3";

package jaeger.api_v2;
Member

is this correct?

option go_package = "proto";

import "gogoproto/gogo.proto";
import "model.proto";
Member

please make it the last import, separated by a blank line

//
//message WriteDependenciesResponse {
//
//}
Member

let's remove dead code

TraceID: traceID,
})
if err != nil {
return nil, fmt.Errorf("grpc error: %s", err)
Member

please use "github.com/pkg/errors" to wrap the error instead of converting it to a string

@@ -0,0 +1,335 @@
// Copyright (c) 2018 The Jaeger Authors.
Member

please split this file into client.go and server.go

string message = 1;
}

service StoragePlugin {
Member

We should rethink this naming. Once #760 is merged, it can only be used with all-in-one, because the collector and query service still need to share the in-memory storage. It would be nice to build a standalone "remote storage" component that can be run as a separate service and accessed by collector/query via a gRPC API. This file is that API, but it wouldn't be an API of just the hashicorp plugin; it's an abstract gRPC Storage API.

// dependencystore/Writer
// rpc WriteDependencies(WriteDependenciesRequest) returns (WriteDependenciesResponse);
// dependencystore/Reader
rpc GetDependencies(GetDependenciesRequest) returns (GetDependenciesResponse);
Member

Is there a strong reason for merging all different types of storage into a single proto service? Could we not have the same component implement different services?

"time"
)

type Store struct {
Member

what is the purpose of this struct? Given its 1-1 delegation pattern, couldn't the shared.StoragePlugin type be used directly wherever this Store type was meant to be used?

@chvck
Contributor

chvck commented Jan 25, 2019

@olivierboucher - do you need any assistance in completing this work?

Also, just to raise this for awareness, as I'm not sure whether it's an issue for this PR or should be addressed once this is in: when I request ~200+ traces I'm hitting gRPC message size limits.

Unhandled Rejection (Error): HTTP Error: grpc error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (4467848 vs. 4194304)
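A common mitigation for this is to return results as a stream of bounded chunks rather than one giant response (a later commit in this line of work did change FindTraces to return a stream). The chunking itself is independent of gRPC; a hedged sketch, with chunkSpans purely illustrative:

```go
package main

import "fmt"

// chunkSpans splits a result set into slices of at most size elements,
// so each gRPC stream message stays under the server's message-size limit.
func chunkSpans(spans []string, size int) [][]string {
	var chunks [][]string
	for len(spans) > size {
		chunks = append(chunks, spans[:size])
		spans = spans[size:]
	}
	if len(spans) > 0 {
		chunks = append(chunks, spans)
	}
	return chunks
}

func main() {
	spans := make([]string, 10)
	for i := range spans {
		spans[i] = fmt.Sprintf("span-%d", i)
	}
	// With a real server stream, each chunk would be one stream.Send(...) call.
	fmt.Println(len(chunkSpans(spans, 4))) // 3 chunks: 4 + 4 + 2
}
```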

@annanay25
Member

@chvck could you kindly open an issue with your description? We would investigate it independently of this PR :)

@chvck
Contributor

chvck commented Feb 8, 2019

@yurishkuro am I correct in thinking that a rough plan here would be:

  1. Split the proto storage service into something that more closely matches current implementations.
  2. Split out the proto changes and make a new PR from those.
  3. Update the Go plugin to address comments in this PR.

Sound about right? Also, what would be the best way for me to go about doing this? I think I'm going to have to fork and create new PRs, even if just for updating the Go plugin part.

@annanay25 I'd be happy to create an issue, but does it make sense outside of this PR? I think it's specific to the span data volume returned over the gRPC service implemented in this PR.

@yurishkuro
Member

@chvck yes, although I'm not sure about the difference between 1 and 2.

chvck added a commit to chvck/jaeger that referenced this pull request Feb 8, 2019
Relating to issue jaegertracing#422 and review jaegertracing#1214, add proto changes
to later allow for gRPC plugins to be used.

Signed-off-by: Charles Dixon <chvckd@gmail.com>
chvck added a commit to chvck/jaeger that referenced this pull request Mar 22, 2019
Relating to issue jaegertracing#422 and review jaegertracing#1214, add proto changes
to later allow for gRPC plugins to be used.

Signed-off-by: Charles Dixon <chvckd@gmail.com>
@jpkrohling
Contributor

What's the state of this PR?

yurishkuro pushed a commit that referenced this pull request Apr 3, 2019
* Add gRPC plugin proto changes

Relating to issue #422 and review #1214, add proto changes
to later allow for gRPC plugins to be used.

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Change FindTraces signature to return a stream.

FindTraces can hit grpc message size limits if a large number of
spans are requested, using a stream mitigates this issue.

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Satisfy gofmt tool

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Change proto package and service names

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Delete commented out spanstorage

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Change FindTraces response to be a stream of spans

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Change from EmptyMessage to google.protobuf.Empty

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Move from using StoragePluginError to google.rpc.Status

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Remove commented code and clean up proto formatting

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Remove protobuf responses and only return successes, rely on Status
for errors

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Update Gopkg lockfile

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Update Span type to come from model.proto

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Update storage proto file

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Add generated storage plugin file to lint ignores

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Lint ignore grpc plugin generated code by name not path

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Rename FindTracesResponseChunk to SpansResponseChunk

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Add marshal/unmarshal tests for DependencyLink

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Add tests for storage protos with custom types

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Run fmt and ignore storage_test for linting

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Remove DependencyLinkSource and use string

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Update headers

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Add SpansChunkResponse test

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Update makefile protoc calls

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Update proto generated files and update license script

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Update generated storage file to new proto layout

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Add storage generated files to import order cleanup ignores

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Move storage generated file to proto-gen dir

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Remove generated plugin storage file from script ignores

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Fix copyright headers

Signed-off-by: Charles Dixon <chvckd@gmail.com>

* Update storage_test generated file

Signed-off-by: Charles Dixon <chvckd@gmail.com>
@chvck chvck mentioned this pull request Apr 5, 2019
@chvck
Contributor

chvck commented Apr 5, 2019

> What's the state of this PR?

@jpkrohling It seems to be stale now. I've just created #1461, which builds on top of the work in this PR.

Signed-off-by: Yuri Shkuro <ys@uber.com>
@yurishkuro
Member

superseded by #1461

@yurishkuro yurishkuro closed this May 5, 2019