What is Stampede?
Stampede is a very fast web server written in Go. It uses an in-memory cache to power common internet-marketing tasks: it can track links, serve files, track events, and serve pixels, and some of these actions are GeoIP enabled.
Does it have an interface or GUI?
Yes, Stampede comes with a built-in admin GUI called the “interface”. It runs as a separate server from the “main” Stampede server but it’s included in the same binary.
The interface allows you to do things like add and manage users, modify your keys, reload your node servers, and so on.
There is no tracking/graphing interface though. Stampede produces JSON-formatted logs, and to process those logs into real-world data and graphs you will need to use a tool like an ELK stack.
What OS does Stampede run on?
Being written in Go, it can run on basically any OS out there. It’s targeted towards Linux, and we give the most love and time to popular Linux server distros like Debian, Ubuntu, and CentOS.
We do provide builds for Windows and macOS as well. If you need a build for something else, email support and we’ll see what we can do.
What other dependencies does Stampede need?
The biggest is that you need to run PostgreSQL. Outside of that, there is one package that needs to be installed on Linux (ca-certificates) so TLS certificates can be verified, but this is all covered in the install documentation.
You also need a way to process your log data. Stampede was built around the notion of using an ELK stack. At the end of the day Stampede produces newline-delimited JSON logs, and there are a lot of tools you could use, both free and paid.
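Since the logs are just newline-delimited JSON, processing them outside of ELK is straightforward. Here’s a minimal Go sketch that tallies log lines by action; the `action` and `key` field names are assumptions for illustration, not Stampede’s documented log schema:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// logLine models a hypothetical Stampede log entry;
// the real field names may differ.
type logLine struct {
	Action string `json:"action"`
	Key    string `json:"key"`
}

// countByAction tallies newline-delimited JSON log entries
// by their action field, skipping malformed lines.
func countByAction(logs string) map[string]int {
	counts := make(map[string]int)
	sc := bufio.NewScanner(strings.NewReader(logs))
	for sc.Scan() {
		var entry logLine
		if err := json.Unmarshal(sc.Bytes(), &entry); err != nil {
			continue
		}
		counts[entry.Action]++
	}
	return counts
}

func main() {
	sample := `{"action":"redirect","key":"abc"}
{"action":"pixel","key":"def"}
{"action":"redirect","key":"ghi"}`
	fmt.Println(countByAction(sample)["redirect"]) // 2
}
```

The same line-at-a-time approach works at any scale, which is why so many log tools (ELK included) can consume this format directly.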
What is an ELK stack?
ELK is short for Elasticsearch, Logstash, and Kibana, a trifecta of tools provided by elastic.co.
Elasticsearch is a search and analytics engine that is very good at handling time-series data.
Logstash is a tool that takes in log data from various sources, processes it, and inserts it into Elasticsearch.
Kibana is a GUI for searching, analyzing, and graphing your Elasticsearch data.
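As a sketch, a minimal Logstash pipeline for ingesting Stampede’s line-JSON logs might look like this; the log path, host, and index name are placeholders, not Stampede defaults:

```
input {
  file {
    path  => "/var/log/stampede/*.log"
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "stampede-%{+YYYY.MM.dd}"
  }
}
```

Because the logs are already JSON, the `json` codec does the parsing and no custom grok patterns are needed.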
Why do you require a separate tool to process the log data?
Tools like ELK are the future, in our opinion. We already send a bunch of different log data to our ELK stack from various sources, so it seemed natural not to re-invent the wheel.
An ELK stack is super flexible and you can send a wide array of data to it and not just Stampede logs.
It’s a little bit more work to get set up, and you have to learn a few things, but once you do we feel like you’ll see the benefits.
Do you support tools other than ELK?
Currently no. If you need help getting your data into another tool we are happy to help, just send an email to support.
What are the system requirements?
This varies depending on what you are doing, how big your data is, and how much traffic you are pushing to a single node.
Are you just doing link tracking/redirects? The data needed in memory is really small: say you have 300 redirect keys, the process memory size while running will be around 40 MB. Again, this is just from our experience, and your mileage will vary.
Are you serving banner images? The entire image is stored in memory for each key that needs that image, so how many keys you have and how big the images are determines how much memory you need. Images also take longer to write out in a response due to their larger size, so you’ll see lower requests/second when serving images.
The more CPUs you throw at it, the more requests you can process. A 2 CPU VPS can max out at ~5k redirects/second, while a high-end 4-core Intel i7 can do upwards of 40k redirects/second. So obviously your mileage will vary depending on hardware, latency, type of action, whether GeoIP is enabled, and so on.
You also need to be able to write the requests to log files. On a dev machine doing 40k req/second, that works out to about 15 MB/second of log output; even spinning platter drives should be able to hit that. You’ll fill drives up fast at that rate, though, so make sure to rotate your log files and have enough space to house them.
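As one way to handle rotation, a standard logrotate rule like the following would rotate and compress the logs daily and keep two weeks of history; the log path is a placeholder, so adjust it to wherever you write Stampede’s logs:

```
/var/log/stampede/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}
```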
For production use we’d recommend starting with at least a 2 CPU VPS with 2 GB of RAM, and adjusting the specs as you get more data.