How Mashape Manages Over 15,000 APIs & Microservices

Mashape powers API-driven software. Hundreds of thousands of developers use our tools to manage, monitor, consume and provide APIs to their partners, apps, customers and employees. In fact, billions of API requests are processed through Mashape’s Marketplace every month.

Ahmad Nassri

Originally Posted On Stackshare.io

Engineering Team

The Mashape engineering team is one of the most highly skilled teams I’ve had the pleasure to work with. While we operate as a single team, our responsibilities are split and organized per project. Currently we have three main projects: the Mashape Marketplace, Mashape Analytics, and Mashape ID.

As our team is spread across San Francisco, Toronto, Montreal, Paris, and London, we have faced a number of challenges in scheduling, communication, and staying in sync. To overcome these issues, we follow the open-source model of building software and collaborating: ensuring that all communication is clear and verbose, assuming asynchronicity, and keeping a communication channel open at all times on Flowdock, while regularly jumping on Google Hangouts whenever further discussion is needed.

Despite having focused areas of responsibility, the engineering team members are not isolated in silos. We encourage everyone to work on and touch different parts of the codebase, across all projects; in fact, most of us will end up working on Mashape Analytics, the Marketplace, and Mashape ID all in the same week.

Mashape’s design team interacts with the engineers on a daily basis. By compiling engineer comments and gathering feedback from users, we’re able to design the best Developer Experience (DX), focusing on our users first. It also helps that we are engineers and developers ourselves!

We have regular one-hour conference calls every Monday, where the previous week’s work is reviewed and new objectives for the coming week are established.

(API marketplace team meeting, mid 2014)

General Architecture

Once upon a time, Mashape’s architecture consisted of around 150K lines of Java: a true monolith! It’s now split into 400 different microservices built with a variety of languages, including Node.js, Lua, Ruby, Java, Python, and OCaml. Everything is an API.

Mashape ID, as an example, is an identity and authentication platform operating entirely as an API. Any time a new product is created, we simply plug it into Mashape ID, which provides out-of-the-box account & email management along with a companion billing microservice of its own.

One of our new projects is powered by a Ruby API, while its frontend app is built in Ember.js. Conversely, Mashape’s Marketplace has Java-powered internal APIs, which are in turn consumed by a number of Express.js microservices serving as the web frontend. All our applications & APIs follow the 12-factor methodology.

In front of all of our applications and APIs sits Kong. We use ElasticSearch for search, while storing data in a number of databases, including MongoDB, PostgreSQL, Redis, and Cassandra (each solving a different problem domain). All of our products run on AWS, though on occasion we use Heroku and DigitalOcean for smaller projects, prototype experiments, or test environments. Splunk is our centralized logging system, Datadog our unified monitoring dashboard, and PagerDuty our alerting service.

Mashape pushes business intelligence metrics from all our products directly into Chart.io. Additionally, we dogfood quite a lot by running Mashape Analytics to monitor and visualize API traffic for the APIs managed by our platforms. We use TravisCI for testing, CodeClimate for quality control, and Chef for deploys. All our code is on GitHub. For code snippet generation we use our own HTTPsnippet, and Mockbin for mocking & testing API prototypes.

All of these applications and systems, alongside all the deployment, monitoring & metrics tools may seem like a lot, but the microservices approach coupled with the 12-factor methodology allows for zen & simplicity in a sea of chaos!

During the course of a slow week, we have ~400 internal services running across a cluster of hundreds of EC2 machines (consisting mainly of large instances). The microservice architecture has allowed us to move much faster and more independently, and we clearly saw the benefits of the transition. But it’s probably helpful to understand why we made the transition in the first place.

Scaling Challenges

Sometime around 2013, we experienced scaling issues with the dozens of standalone third-party APIs added to Mashape’s Marketplace.

Between July 2012 and July 2013, our API transaction volume grew 50X.

(Chart: API transaction volume, July 2012 – July 2013)

Not only was the volume of requests passing through our proxy taking off, but Mashape was also starting to manage more APIs than anyone else in the market. At that time, the Marketplace had 3,500 public APIs under its belt.

The Marketplace needed to scale fast. Our API proxy had been built in Node.js, with a heavy Java backend. It was a monolithic architecture, and our Node.js proxy could not handle large traffic spikes.

Over time it became unpredictable and consumed ever larger amounts of AWS resources. Our bank account suffered as a consequence, and our CEO lost some hair.

We had to find a solution that could not only handle the massive amount of traffic, but also be best in class in reliability and security. We wanted to move away from a monolithic architecture towards something much more flexible.

(Architecture diagram)

We’ve always had a desire to create our own API gateway, targeting the following attributes:

  1. Predictable stability over months and years, not days.
  2. Efficient and easy to scale across multiple geographic regions with little overhead.
  3. Modular and extensible via simple plugins.
  4. Independent: environment, framework & language agnostic.
  5. Easily configurable via a RESTful API.

Basically, an “ElasticSearch” for API management. Our CTO began his search and eventually met an engineer at CloudFlare, who introduced us to the amazing world of OpenResty. Built on top of Nginx, OpenResty allows developers to extend Nginx’s functionality with Lua, an embedded scripting language used by many high-traffic websites.

CloudFlare built Lua scripts around security and caching. We thought we could do the same but with a focus on API and microservice management. Nginx was already proven worldwide as a stable and trustworthy HTTP server, and was perfect for our base requirements. In combination with OpenResty, we started building Kong.

Kong is born (the API Gateway)

In 2013, Mashape adopted Nginx / OpenResty to implement features on the Marketplace’s published APIs. Billing, authentication, throttling and rate-limiting became custom OpenResty plugins written in Lua. We called it Kong (yes, like “King Kong”, after our own namesake: MashAPE).

(Image: King Kong)

Today Kong efficiently manages over 15,000 APIs, serving both our internal microservices and our Marketplace clients’ APIs, all while handling spikes, highly concurrent scenarios, and billions of successful API calls per month.

We created our own “ElasticSearch” for API management; much like Elastic’s approach to wrapping Lucene in a RESTful interface with some sugar on top, Kong was a wrapper around Nginx.

Below are some examples of Kong’s operations within the Marketplace.

Once an API is added to the Mashape platform from the GUI, the platform sends a POST request to Kong’s admin API:

curl -i -X POST \
  --url http://localhost:8001/apis/ \
  --data 'name=mockbin' \
  --data 'upstream_url=http://mockbin.com/' \
  --data 'request_host=mockbin.com'
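
At this point Kong starts proxying traffic for the new API. A quick sanity check looks like this (a minimal sketch, assuming Kong’s default proxy port of 8000 and the Host-based routing implied by the request_host field above):

# unauthenticated request, routed by the Host header
curl -i -X GET \
  --url http://localhost:8000/ \
  --header 'Host: mockbin.com'

Kong matches the Host header against the registered request_host and forwards the request to the configured upstream_url.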

Then, based on the API provider’s needs, various plugins are added on top of it. As an example, if the user needs Key Authentication:

curl -i -X POST \
  --url http://localhost:8001/apis/mockbin/plugins/ \
  --data 'name=key-auth'
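
Once the plugin is active, Kong enforces authentication at the proxy itself. As an illustration (same assumptions as the proxy call above), the earlier request now comes back with a 401 Unauthorized because no key is supplied:

# the same request is now rejected by Kong before reaching the upstream
curl -i -X GET \
  --url http://localhost:8000/ \
  --header 'Host: mockbin.com'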

User management is a breeze: the Consumer object is an abstract entity that can represent Users, Applications, Clients, or any other system accessing the API. Consumers are easily created and managed through the same RESTful interface.

curl -i -X POST \
  --url http://localhost:8001/consumers/ \
  --data "username=Jason"

Kong’s RESTful API allows us to manage thousands of APIs in the Marketplace. With simple, single-focused codebases, we keep our microservices and individual systems completely independent. Kong itself, by its pluggable nature, can be described as a paragon of microservice architecture. At its core, it consists of no more than a couple thousand lines of code, which handle database abstraction, routing, and plugin management. Plugins can live in different codebases and inject themselves anywhere into the lifecycle of a proxied request in just a few lines of code. We are very proud of this approach, which is quite unique among API management gateways today.

Kong was released to the public in April 2015, after being partly rewritten for the occasion, and has built a strong community of users and contributors since then. We are very grateful for the contributions from developers all around the world, and feel rewarded for our decision to open-source Kong, all the while contributing back to the community by improving many of the Lua packages used in OpenResty.

We generate Kong packages for many Linux distributions and cloud providers to allow for easy installation and deployment. The most successful distribution so far is Kong on Docker, with over 180,000 downloads since its release.
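
For anyone curious what the Docker route looks like in practice, here is a minimal sketch. It assumes the official kong image on Docker Hub (earlier releases shipped as mashape/kong), that image’s KONG_DATABASE / KONG_PG_HOST environment variables, and a reachable PostgreSQL datastore; migrations, clustering, and further configuration are left out:

# pull the official image (assumption: current image name; older releases used mashape/kong)
docker pull kong

# start Kong against an existing PostgreSQL datastore (placeholder host; migrations not shown)
docker run -d --name kong \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=<your-postgres-host>" \
  -p 8000:8000 \
  -p 8001:8001 \
  kong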

The Road Ahead

Every month, we release more pieces of Kong to the open-source realm as part of our commitment to driving and growing the open-source API developer community. We are working hard to bring Kong to its 1.0 public milestone so that everybody can enjoy what this internal tool has brought to our company and our community.

We are also working on a number of other products that complete the family of services we offer to API developers, and we can’t wait for you to hear all the exciting announcements we are going to make over the course of the next few months!

Our Vision

Inside our office we have giant pictures of the first Ford Assembly Line, serving as our inspiration.

Over 100 years ago the second Industrial Revolution happened. Among many new inventions came two in particular: Electricity and The Assembly Line (thanks to electricity). This combination allowed Mass Production for the first time. The result was a dramatic decrease in the cost and time required to bring a product to market. Mass production quickly became a worldwide phenomenon, building companies worth billions. It changed the world.

We see a strong parallel in the software industry today: electricity is cloud computing, and the assembly line is APIs, from which you pick the components and services you need to build your product. This is history repeating itself, and we want to be the pioneers of the next revolution, to change the world yet again.

(Image: assembly line)

If you have questions about the transition we have made and the lessons learned, or about Kong and our open source projects, please drop me a line at: ahmad@mashape.com - always happy to chat!
