

Denver-Based Cloud Elements Latest Member to Join Open API Initiative


The Open API Initiative is pleased to welcome Cloud Elements, a unified platform for API integration and management based in Denver, Colorado, as the latest company to join the OAI. Here is what our newest members had to say about the benefits of joining the OAI and why they contribute to the OpenAPI Specification.

Mark Geene, CEO of Cloud Elements

Mark Geene, CEO and co-founder, shares: “We are thrilled to be the newest members of the OAI and to join forces to create, evolve, and promote the vendor-neutral API spec. Since our early days, Cloud Elements has recognized the importance of open, machine-readable API docs and has always been committed to Swagger.”

The Cloud Elements Summer Hike Days at Matthew Winters near Red Rocks.

Joining OAI is an integral decision to our product development strategy here at Cloud Elements, which we believe will further enhance the simplicity of our API Integration platform, and continue to reduce the burden on our customers to combine Cloud Elements with other API Management Platforms and API Gateways. Ultimately we believe that the Open API Initiative not only saves a significant amount of time for our own developers, but our customers’ developers, as well.

Building APIs to a specification is a major key to success. Rocky Madden, our Senior Platform Engineer, chimes in: “Not only has the Open API Initiative taken on the Swagger Specification, but it is stepping up its game with v3.0, improving how APIs can be discovered and understood by humans and machines alike. Interoperability is what the OAI is all about. It’s extremely powerful when tools and services are able to be built around the same common spec, allowing us to create robust integrations, improve how we interop with other APIs, reduce time to delivery, and so on.”

Rocky Madden, Senior Software and DevOps Engineer, Cloud Elements

Madden continues, “For us specifically, the OpenAPI Specification has improved our own platform and helps us quickly add new Elements (endpoints) to our integration catalog. The standardization has also made it possible for us to tap into other cloud services that are also built on OAI. Our partnerships with companies like Amazon AWS will allow developers to create REST APIs which can be published into such marketplaces easily, made possible, in part, by collaborating with the Open API Initiative.”

There are a lot of companies, including ourselves, joining the movement to support the Open API Initiative. We couldn’t be more thrilled to be a part of OAI and continue to create and evolve the API Description format.

TDC: Documentation, explaining the 3.0 spec part 5


With version 3.0 of the OpenAPI Specification nearing a beta candidate, this series of posts is meant to provide insight into what is changing and how, from the perspective of the Technical Developer Community (TDC). Earlier posts in the series described the background and rationale behind the next evolution of the spec, some Structural Changes, Request Parameters, and Protocol and Payload.


The huge variety of tooling that grew up around previous versions of the OpenAPI Specification suggests that those developers had access to the information necessary to build those tools. Not only should the 3.0 version continue that success, but with increased participation from the members of the TDC and community, the specification should be even more accessible, clear, and unambiguous than before.

Table of Contents

A table of contents has been added to the specification by Rob Dolin in order to provide new readers a quick overview of the document structure, and it will make it easier to access relevant parts of the specification reference. As the spec changes begin to settle down, the documentation effort will pick up momentum, and the spec will become more accessible.

CommonMark

The 2.0 specification used GitHub-Flavored Markdown (GFM) in order to provide rich text descriptions of services. Unfortunately, GFM has no formal specification itself, and some of its features work only for content hosted on GitHub. For this reason, the 3.0 draft of the OpenAPI Specification has adopted the CommonMark format, which will enable tools to be more consistent in their rendering of Markdown. CommonMark is mostly compatible with GFM, so this change has little downside or impact on existing documentation. And the precision of CommonMark's detailed specification removes ambiguity, which should be a boon in general.
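For illustration, here is the kind of rich text description this affects. The API metadata below is hypothetical; the point is that Markdown in description fields will be interpreted per the CommonMark rules, so lists and inline formatting render the same way in every compliant tool:

```yaml
info:
  title: Petstore (illustrative example)
  version: "1.0"
  description: |
    This description is interpreted as **CommonMark**:

    * list items render consistently across tools
    * `inline code` and *emphasis* behave predictably
```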

The next post will discuss other miscellaneous outstanding issues that have been raised by the community.

About the Author

Darrel Miller is a Senior Software Development Engineer on Azure API Management for Microsoft. Darrel is a member of the OpenAPI Specification Technical Developer Community. You can follow him on Twitter or on his blog Bizcoder.

TDC: Protocol and Payload, explaining the 3.0 spec part 4


With version 3.0 of the OpenAPI Specification nearing a beta candidate, this series of posts is meant to provide insight into what is changing and how, from the perspective of the Technical Developer Community (TDC). The first post described the background and rationale behind the next evolution of the spec, the second covered Structural Changes, and the third discussed request parameters.

Protocol and Payload

The OpenAPI Specification has had great success describing standard request/response HTTP APIs. However, many in the community have expressed an interest in describing distributed APIs beyond the simple HTTP model, such as WebSockets APIs, RPC APIs, Hypermedia APIs, and publish/subscribe APIs. After much discussion by the TDC, the goal is to extend the specification to some of these new use cases without adding significant complexity to the existing use cases.


Webhooks leverage HTTP in a publish/subscribe pattern and have become popular among API providers, including Slack, GitHub, and many other services. Webhooks are simple to use and fit nicely into an existing HTTP-based style of API. However, one criticism of the OpenAPI spec was that it had no way to describe an outbound HTTP request and its expected response. The new callback object makes this possible. A callback object can be attached to a subscribe operation in order to describe an outbound operation that a subscriber may expect.

Path Item Out Going
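As a rough sketch of the idea: a subscribe operation carries a callback object describing the outbound request the subscriber should expect. The exact field names and expression syntax were still under discussion at the time of writing, so treat this as illustrative only:

```yaml
paths:
  /subscribe:
    post:
      summary: Register a webhook URL
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                callbackUrl:
                  type: string
      callbacks:
        onEvent:
          # runtime expression: the URL the subscriber supplied
          # in the body of the subscribe request
          '{$request.body#/callbackUrl}':
            post:
              requestBody:
                content:
                  application/json:
                    schema:
                      type: object
              responses:
                '200':
                  description: Callback received and acknowledged
```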


Many approaches for describing hypermedia APIs have been proposed to the OpenAPI repository on GitHub. A major problem is that a static description of resources in a hypermedia API runs counter to the runtime/discovery philosophy that is the strength of hypermedia APIs. Nonetheless, the ability to describe static relationships between resources in an API would have some benefit. To this end, the 3.0 draft specification introduces the links object in order to describe which new resources may be accessed based on the information retrieved from an initial resource. This is not necessarily hypermedia-driven, in that the URLs to the new resources are not embedded in the returned payload; instead, they are constructed based on rules defined in the OpenAPI Specification. A new expression syntax has been introduced to allow information from a response to be correlated with parameters in the linked operation.

Path Item Link Object
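A minimal sketch of what a link object might look like, assuming a hypothetical `getUserRepositories` operation; the expression syntax shown was still in flux at the time:

```yaml
responses:
  '200':
    description: A single user
    content:
      application/json:
        schema:
          type: object
          properties:
            username:
              type: string
    links:
      userRepositories:
        # correlate a value extracted from this response with a
        # parameter of the linked operation via the expression syntax
        operationId: getUserRepositories
        parameters:
          username: $response.body#/username
```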

The static description of links between resources should allow the generation of more useful documentation and client libraries that can encapsulate the process of traversing from one resource to another. This could allow client libraries that reduce the coupling between client applications and server-defined resource hierarchies.

JSON Schema

A number of requests were made to expand the subset of JSON Schema that the OpenAPI spec allows, to include more complex features of JSON Schema. In the 2.0 spec process, the potential tooling complexities around code generation prompted the exclusion of anyOf and oneOf. However, many users have requested relaxing that constraint, even though it would compromise tooling support for those features. This is one of the great challenges in spec design: when making choices like this, it is never easy to know whether it is better to give people sharp tools that they could cut themselves with, or to rely on experience and say no, the burden of this responsibility is too great. While OpenAPI 2.0 took the more conservative approach, the user base has since grown more experienced, so some of the restrictions are being lifted, and users will have to make smart choices.
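For instance, a payload that must match exactly one of several shapes could be described with oneOf along these lines (the schema names are hypothetical, and code-generation tooling may not fully support such schemas):

```yaml
requestBody:
  content:
    application/json:
      schema:
        # the payload must validate against exactly one of these
        oneOf:
          - $ref: '#/components/schemas/Cat'
          - $ref: '#/components/schemas/Dog'
```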

Alternative Schemas

With the improved support for non-JSON media types, the limitations of using only JSON Schema to describe payloads are becoming untenable. The TDC is currently exploring options for describing the schemas of non-JSON payloads. If this challenge can be overcome, it may become possible to remove the form parameter type completely and to enable support for protocols like gRPC that use protobuf and the protobuf schema.

The next post will discuss some planned improvements to Documentation.

About the Author

Darrel Miller is a Senior Software Development Engineer on Azure API Management for Microsoft. Darrel is a member of the OpenAPI Specification Technical Developer Community. You can follow him on Twitter or on his blog Bizcoder.

OpenAPI Spec at APISTRAT 2016


The Open API Initiative (OAI) is focused on creating, evolving and promoting a vendor neutral API Description Format based on the Swagger Specification. As an open source project under the Linux Foundation, the OAI is committed to developing and promoting the OpenAPI Spec for use by all. We welcome contributions from members and non-members alike.

There are many things you and your organization can do to help the community.

  1. Leverage the spec: There are no fees or membership requirements to use the OpenAPI Specification: GitHub | OpenAPI Specification
  2. Develop the spec: All members of the community are invited to take part in conversations on how to evolve the spec. Find one of our meta issues that applies to your project. If you don’t see one here, log a new issue.
  3. Share how you use the spec.
  4. If you are looking to support the project directly, consider having your company join the OAI as a member. Fill out this form to learn more. The project charter is here.

Not sure how to get started? Read how to participate here.

Learn More at APISTRAT 2016

November 3rd

API DESIGN & GOVERNANCE | 11:00 am – 12:30 pm
Matthew Reinbold, Lead, API Center of Excellence, Capital One DevExchange
What happens when a company decides to API everything? Organizations may be quick to follow Netflix and Amazon in the pursuit of microservice and service-oriented architecture (SOA). But without the application of Conway’s Law, any governance effort (and, by extension, the API program) will fall short. Matthew has been instrumental in establishing and growing API governance programs at multiple enterprise companies. In this talk he will discuss effective API governance and the challenges in achieving it.

MICROSERVICES: gRPC or REST? Why not both? | 11:00 am – 12:30 pm
Sandeep Dinesh, Developer Advocate, Google Cloud
In this talk, Google’s Sandeep Dinesh will show you how you can build a gRPC endpoint that can intelligently serve gRPC over HTTP/2 while simultaneously serving JSON/REST over HTTP/1.1 on the same port! Then, he’ll walk through some benchmarks and best practices for deploying these microservices in a scalable way. Read more here


Hypermedia vs Graphs: Best buddies or the next API battleground? | 1:30 pm – 3:00 pm
Gareth Jones, Microsoft
We all wonder whether this is the year that hypermedia becomes mainstream. But coming up on the rails is a new challenger: graph-shaped APIs that predefine a wide network of relationships promise some of the benefits of hypermedia APIs but present a more familiar programming model. Will graphs push aside hypermedia before it’s even had a chance, or will both styles play nicely? Is this trend a blocker to HATEOAS nirvana or a stepping stone?
I’ll challenge the audience to consider combining these two approaches to open the door to mainstream hypermedia use, enriching a fixed graph with dynamic data and behavior. Volatile data and logic can use hypermedia and more foundational data can use graph approaches. In this way we can add value and depth to our apps without the drastic rewrites we so often expect from the transition to hypermedia. Read more here


Breathing new life into legacy SOAP services | 3:05 pm – 4:35 pm

Darrel Miller, Software Developer, Microsoft
The reality is that SOAP services are no longer cool. Developers today want to integrate with APIs that are labelled REST. They want descriptive URLs, JSON payloads and familiar HTTP status codes. But many enterprises spent 10 years building SOAP services, and many of those services are working just fine today. Rewriting them would be a huge effort with the risk of minimal gain. The good news is that it is possible to give developers what they are looking for without a rewrite. You can take advantage of HTTP’s layered architecture to put a façade in front of your SOAP services, reuse all your existing code, breathe new life into your service, and still support the existing client applications that are happily sending SOAP messages.

This talk will explore the process of transforming native HTTP requests into SOAP messages and back into native HTTP responses. We will discuss which parts of the façading process can be automated and which parts require design decisions. Finally, we will explore what capabilities we gain with this new style of API, and what we lose, so that you can make an informed decision on the future of your SOAP services.

The Big Problem with the Big Picture | 3:05 pm – 4:35 pm
Amber Fallon, SmartBear Software
APIs are BIG. Bigger than many of us, even those of us in the industry, may realize – in fact, the total number of public APIs has exploded from under 7,500 in 2012 to over 15,000 in 2015. Larger companies have larger API structures, with more dependencies and integrations than ever before, and that number is growing. Entire business models have formed around the APIs provided by some of the industry’s heaviest hitters. Imagine app juggernauts like Uber or Waze functioning without Google Mapping technology, or the important relationship between Netflix and the underlying applications that power its video streaming – these applications are absolutely dependent on their underlying APIs. And, new applications dependent on public APIs crop up every single day.
With that scale come challenges like managing your public API in a way that will foster growth of dependent APIs, scaling up, maintaining your SLA, and allowing for the versatility of user created applications; your API may be used in ways you never envisioned. In this session, I’ll address challenges like sandboxing at scale, aiding in end user integrations, and managing the infinite number of dependencies larger APIs may encounter.

How Low Can You Go? Reducing Costs and Development Time with AWS Lambda, API Gateway, Elastic Beanstalk, and 3scale (and don’t forget Swagger!) | 9:05 am – 9:30 am
Erin McKean, Founder, Wordnik
More than 18,000 developers have keys to the Wordnik API, which they use to build everything from edtech applications to word games to Twitterbots! Our original architecture has served more than 2bn calls since 2010, and has largely been a free service. When Wordnik relaunched as a nonprofit in late 2014, we realized we had to monetize our API to create a sustainable foundation for future development, and in order to monetize, we needed to reduce our operating costs and introduce new features — neither of which our previous architecture made simple. Enter AWS Lambda+API Gateway! By breaking down our current API into tiny functions, we can take advantage of the current microservice craze with minimal ops headaches. And by using Elastic Beanstalk and 3scale to manage a reverse proxy, handling routing between the old and new API calls and managing billing is much easier, too. Add a pretty Swagger interface on top and you’re good to go … faster and cheaper!

November 4th

Testing your APIs with Cucumber | 11:15 am – 12:45 pm
Ole Lensmar, CTO, SmartBear Software
BDD with Cucumber is gaining in popularity, as it allows the expression of requirements in a custom domain-specific language that can be executed as tests to validate actual implementations. APIs in particular seem to have a lot to gain from this approach as they are often technical in nature; a correctly performed Cucumber implementation should allow increasingly non-technical stakeholders to participate in defining the requirements for an API’s behaviour and functionality. But how “non-technical” can Cucumber vocabularies be made to describe something as inherently technical as APIs? When does it make sense to use imperative vs declarative approaches? How do you translate simple language to complex payloads and validations? How does the usage of Cucumber scale as the complexity of an API increases? And how can standards like Swagger make Cucumber testing even more intuitive? I know you’re dying to find out – so don’t miss this opportunity to adopt the path of API Gherkin Nirvana!

OpenAPI Trek Beyond API Documentation | 2:00 pm – 3:30 pm
Arnaud Lauret, IT Architect, AXA Banque
OpenAPI offers many possibilities that span the full API lifecycle, yet it is seen purely as a solution for generating API documentation. This session will tell the story of AXA Banque’s evolution from .doc and .pdf API documentation to the extensive use of OpenAPI Specification (formerly Swagger). Throughout the journey, we will identify the many advantages of API definition languages beyond simply generating API documentation, including design, testing, documentation continuous delivery, code generation, mocking, and prototyping new ideas.

RESTing on the Shoulders of a Giant: How Capital One Builds Its APIs | 2:00 pm – 3:30 pm
Abdelmonaim Remani, Engineering Tech Lead, Capital One DevExchange
Since it was first introduced, the REST architectural style was plagued with a high degree of vagueness and a lot of ambiguity. The absence of a concrete reference implementation early on left it up to the most popular web APIs to establish themselves as the gold standard. To win the hearts and minds of every last web developer, the leading tech companies invested greatly in building RESTful APIs, and put tremendous resources behind evangelizing their own individual flavors of REST. Capital One is no exception to the rule. As counter-intuitive as it may seem to be for a financial institution of its caliber, and as challenging as it is being a player in an industry locked into archaic technology and restrained by regulations, Capital One is fully engaged in a company wide initiative to expose financial services through internal and public-facing APIs implemented in the latest and greatest open source technologies. This talk is about how Capital One does REST, where the stakes are much higher and the risks are well beyond giving customers the wrong turn-by-turn directions.

Full Lifecycle Tool Support for APIs | 2:00 pm – 3:30 pm
Steven Fonseca, Principal Services Architect, Intuit
The focus of the talk is to showcase an internal Intuit tool called API Lifecycle Manager, a web app and set of APIs that enables the IT organization to produce APIs efficiently across their lifecycle, from conception to retirement. The audience will be given a glimpse into how Intuit IT coordinates the build-out of a strategic portfolio of APIs, including their functional allocation and service ownership, comprehensive contract documentation with innovations not found in current industry modeling languages, and documentation generation and delivery. Examples of API Lifecycle Manager will be shown in the context of a story of how IT is enabling Intuit products to deliver a delightful customer experience, particularly in the areas of subscription-based billing and commerce.

Join us for the OpenAPI Spec Workshop


The Open API Initiative and Capital One are co-hosting an OpenAPI Spec Crash Course as part of the 2016 APISTRAT Conference.

Register here.

Part I: Introduction to the Open API Specification

This workshop will provide an introduction to the new Open API format for specifying APIs. The format is the core specification of the Linux Foundation’s Open API Initiative and is the most widely used API definition format. The workshop will include:

  • An overview of the Open API format and its origins in the Swagger specification.
  • Details of when, where and how to use the OAI format with speakers covering live examples.
  • A high level overview of where the format is headed.


Part II: Deep Dive into the Open API Specification

The second portion of this workshop provides a deep dive into the Open API format for users and those evaluating the format. The format is the core specification of the Linux Foundation’s Open API Initiative and is the most widely used API definition format. Topics covered in this portion of the workshop include:

  • A technical deep dive into the Open API Specification including worked examples.
  • An overview of tools and services which use the format.
  • A session on the upcoming v3 format and what it is likely to contain.
  • Information on how to get involved in the OAI and contribute to the Open API Specification.


Request Parameters: Explaining the 3.0 spec, part 3


With version 3.0 of the OpenAPI Specification nearing a beta candidate, this series of posts is meant to provide insight into what is changing and how, from the perspective of the Technical Developer Community (TDC). The first post described the background and rationale behind the next evolution of the spec, the second covered Structural Changes, and the next few posts will address Protocol, Documentation, and other remaining open items.

Request Parameters

In OpenAPI 2.0, all the pieces of the request message that can vary, including URL parameters, headers, and body, were described as a set of typed parameters. Experience has shown that mapping the description of an HTTP request body into the same set of metadata as query and header parameters presents a number of challenges. Today, let’s break down how this will affect the request body, content objects, and cookie parameters.

Request Body

A new requestBody property has been added to the operation object. Describing the request body as a named property of its own provides a clearer distinction from, and simplifies, the parameter object, and it makes the request body itself easier to describe.

Content Objects

OpenAPI 2.0 established a complex relationship between:

  1. Where response media types are declared
  2. Where response schemas and examples are defined

Multiple response media types could be declared globally and optionally overridden at the operation level. However, only one schema could be defined per response object, while examples could be defined by media type and/or one per schema. This made it difficult to define different schemas for different media types and to use different media types for different response objects.

To address this, the content object introduces a simple relationship between response objects, media types, and schemas. Each response object contains a single content type object for each supported media type. Each content type object has a single schema and an array of examples for that media type.

The content object is also used for the request body and works identically for describing the inbound payload. This has the magic result of removing the need for the produces and consumes arrays. Discussions are ongoing about whether to also leverage content objects to describe complex URL parameters and response header values.

These changes result in a path item with this structure:
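Roughly, the reworked path item looks like the following sketch (schema names are hypothetical, and the exact layout was still subject to change in the draft):

```yaml
paths:
  /pets:
    post:
      # requestBody replaces the old "body" parameter type
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Pet'
      responses:
        '201':
          description: Pet created
          # each supported media type gets its own content type object,
          # each with a single schema
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Pet'
```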


Cookie Parameter

TDC members unanimously agreed that using cookies is not a best practice for passing parameters. Nevertheless, enough people had expressed the desire to describe existing cookie-based APIs that it warranted the inclusion of a new parameter type.

In the next post we will cover new interaction patterns to be supported and some planned changes for payload descriptions.

TDC: Structural Improvements: explaining the 3.0 spec, part 2


With the version 3.0 of the OpenAPI Specification nearing a beta candidate, this series of posts is meant to provide insight into what is changing and how. The first post described the background and rationale behind the next evolution of the spec, and the next few posts will address the progress made by the Technical Developer Community (TDC) so far.

In an effort to organize the work, six omnibus meta-issues have been created:

  1. Structural improvements
  2. Request Parameters
  3. Protocol and Payload
  4. Documentation
  5. Security
  6. Path definitions

Over the next couple of weeks, we will describe each in order. Today, we’ll cover…

Structural Improvements

The next version of OpenAPI will have some pretty significant changes—in semantic versioning terminology, this represents a major change from 2.0 to 3.0. Since breaking changes happen rarely, they present the opportunity to make sweeping structural improvements.

Most importantly, with the OpenAPI Specification version 3.0, the overall structure of the document has been simplified:
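In the absence of the original diagram, the simplified root structure of a 3.0 draft document can be sketched roughly as follows (property names per the draft discussions at the time, all still subject to change):

```yaml
openapi: "3.0.0"        # replaces the 2.0 "swagger" property
info:
  title: Sample API
  version: "1.0"
hosts: []               # root URLs for the API (draft proposal)
paths:
  /pets:
    get:
      responses:
        '200':
          description: OK
components: {}          # container for reusable metadata
```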


New Version Identifier

One obvious place to begin the transition from the description once called Swagger 2.0 to what is now the OpenAPI Specification is the version property: the swagger property from 2.0 will be replaced by an openapi version identifier. Going forward, this identifier will follow the conventions of semantic versioning and therefore have three parts, major.minor.patch, which likely means 3.0.0. This also explains why the value is a string rather than a number, and it will allow for more controlled and identifiable changes to the specification in the future.

Components Objects

OpenAPI 2.0 was somewhat inconsistent in the behavior of root-level properties. For example, some properties contained metadata that was applied globally to the API, while other properties were used as containers for reusable fragments of metadata to be referenced elsewhere. In order to clarify this and to minimize the number of properties at the root level, a new components property will be introduced. This components property contains only reusable metadata that will be referenced elsewhere in the document.
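A minimal sketch of the idea, with a hypothetical Pet schema defined once under components and referenced from an operation:

```yaml
components:
  schemas:
    Pet:
      type: object
      properties:
        name:
          type: string
paths:
  /pets/{id}:
    get:
      responses:
        '200':
          description: A single pet
          content:
            application/json:
              schema:
                # reusable fragment referenced from components
                $ref: '#/components/schemas/Pet'
```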


Multiple Hosts

OpenAPI 2.0 allowed specifying a single host and basePath, and yet the schemes attribute allowed specifying both http and https, effectively enabling two hosts that vary only in the scheme. In OpenAPI.vNext, the working branch of the spec repo, a new root-level hosts object contains an array of objects with host, basePath, and scheme properties. By structuring this as an array of objects, any number of root URLs for the API can be supported, and it allows for a clearer correlation of the scheme, host, and basePath properties. It also reduces the number of root-level properties required, simplifying the document structure.
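A sketch of the draft-era hosts array (the hostnames are placeholders; this draft structure was still under discussion and could change before release):

```yaml
hosts:
  # the same API reachable over both schemes
  - host: api.example.com
    basePath: /v1
    scheme: https
  - host: api.example.com
    basePath: /v1
    scheme: http
  # an alias for the same API, not a different environment
  - host: eu.example.com
    basePath: /v1
    scheme: https
```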


During the discussions around the GitHub issues and associated pull request, the TDC addressed the question of whether paths might be identified as representing different environments, such as dev, test, and production. However, this would have suggested that different hosts might point to different API implementations, and that was not the intent behind supporting multiple root URLs for the API. Rather, the goal was to allow a set of aliases to be defined for the same API. (Note: there remains an open issue concerning parameterization of the host and basePath, which might allow for pointing to different environments.)

Additionally, the host, basePath, and scheme may be overridden at the path item level. This should make it easier to incorporate functionality provided on a separate host into an API description.


More Descriptive Options

The new specification allows users to describe their APIs in a more resource-oriented manner. Previously, descriptions of API behavior were defined at the operation level. For APIs designed in a resource-oriented way, documentation text would often read “GET a foo”, “POST a foo”, “DELETE a foo”. If the purpose of “foo” needed to be elaborated upon, it became necessary to somewhat duplicate that text for each operation. Now a Path Item Object can contain both a short summary text and a longer description text. The choice to provide additional description at the operation level is left up to the user, based on whether further explanation is required.
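For example, the shared purpose of a "foo" resource could be stated once on the path item rather than repeated per operation (the resource and fields here are purely illustrative):

```yaml
paths:
  /foos/{id}:
    # resource-level text shared by all operations on this path
    summary: A foo resource
    description: >
      Longer explanation of what a foo is and how it behaves,
      written once instead of duplicated in every operation.
    get:
      responses:
        '200':
          description: The requested foo
    delete:
      responses:
        '204':
          description: The foo was removed
```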

Examples Object

The options for describing examples have been significantly expanded. The previous specification indicated that examples could only be described by a JSON or YAML object. Now, by using a JSON string, any format of example can be described. Additionally, a $ref object can be used to point to external files containing examples. The exact method of structuring examples is still in flux and may depend on whether the proposed content object is accepted by the TDC. The content object contains an array of example objects for defining one or more examples for each media type.
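Since the structure is explicitly in flux, the following is only a guess at the shape under discussion: an array of examples per media type, mixing an inline JSON string with a $ref to a hypothetical external file:

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/Pet'
    examples:
      # inline example as a JSON string
      - '{"name": "Fluffy", "tag": "cat"}'
      # example stored in an external file
      - $ref: 'examples/pet-dog.json'
```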

As you can see, this one meta-issue about the structural changes has a lot to digest. The next post will discuss changes to how requests are described in OpenAPI 3.0.

  • Previous post in this series Part 1 – Background and how to get involved!


About the Author
Darrel Miller is a Senior Software Development Engineer on Azure API Management for Microsoft. Darrel is a member of the OpenAPI Specification Technical Developer Community. You can follow him on Twitter or on his blog Bizcoder.