Bazel Blog

Google Summer of Code 2017

Thank you very much to everyone who applied for Google Summer of Code with Bazel. We received many interesting proposals, and we are excited to see that so many of you are enthusiastic about Bazel. Since this is the first Google Summer of Code with Bazel, we decided to mentor only one student. Of course, you are all welcome to contribute to our projects, even if it is outside of Google Summer of Code.

Harmandeep is going to work with us on Bazel this summer, and will develop a tool to provide editor services (e.g. code completion) for BUILD and .bzl files, using the Microsoft Language Server Protocol. For more information, you can check the proposal and follow Harmandeep's blog.

A big thank you to everyone who applied!

By Laurent Le Brun

Bazel 0.5.0 Released

We are delighted to announce the 0.5.0 release of Bazel (follow the link for the full release notes and list of changes).

This release simplifies Bazel installation on Windows and platforms where a JDK is not available. It solidifies the Build Event Protocol and Remote Execution APIs.

Note: Bazel release 0.5.0 contains a bug in the compiler detection on macOS which requires Xcode and the iOS tooling to be installed (corresponding issue #3063). If you had Command Line Tools installed, you also need to switch to Xcode using sudo xcode-select -s /Applications/

Improvements from our roadmap

Bundled JDK

As announced earlier, when using an install script, Bazel now comes bundled with JDK 8 by default. This means fewer steps are required to install Bazel. Read more about the JDK 7 deprecation in the related blog post.

Windows support: now in beta

Bazel on Windows is now easier to install: it is no longer linked with MSYS. A follow-up blog post will detail this further. Bazel is now able to build Java, C++ and Python on Windows.

Build Event Protocol

The Build Event Protocol is now available as an experimental option; it enables programmatic subscription to Bazel's events (build started, action status, target completed, test results…). Currently, the protocol can only be written to a file. A gRPC transport is already in the works and will be added in the next minor release. The API will be stabilized in 0.5.1.
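
For example, assuming the flag spelling that later releases stabilized on (the 0.5.0 flag may have carried an experimental prefix), a text-format event stream can be captured with bazel build //my:target --build_event_text_file=/tmp/build_events.txt.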

Coverage support for pure Java targets

Use bazel coverage //my:target to generate coverage information from a java_test.

Other major changes since 0.4.0

New rules

New rules in Bazel: proto_library, java_lite_proto_library, java_proto_library and cc_proto_library.

New Apple rules

There is a new repository for building for Apple platforms: rules_apple. These rules replace the deprecated iOS/watchOS rules built into Bazel. By rebuilding the rules from the ground up in Skylark and hosting them separately, we can more quickly fix bugs and implement new Apple features and platform versions as they become available.

Android Support Improvements

  • Integration with the Android Support Repository libraries in android_sdk_repository.
  • Support for Java 8 in Android builds with --experimental_desugar_for_android. See Android Studio's documentation for more details about Android's Java 8 language features.
  • Multidex is now fully supported via android_binary.multidex.
  • android_ndk_repository now supports Android NDK 13 and NDK 14.
  • APKs are now signed with both APK Signature V1 and V2. See Android documentation for more details about APK Signature Scheme v2.

Remote Execution API

We fixed a number of bugs in the Remote Execution implementation. The final RPC API design has been sent to bazel-discuss@ for discussion (see Design Document: Remote Execution API) and it should be finalized in the 0.6.0 release. The final API should only be a minor change compared to the implementation in this 0.5.0 release.


Skylark

  • Declared Providers are now implemented and documented. They enable more robust and clearly defined interfaces between different rules and aspects. We recommend using them for all rules and aspects.
  • The type formerly known as 'set' is now called 'depset'. Depsets make your rules perform much better, allowing rules' memory consumption to scale linearly instead of quadratically with build graph size - make sure you have read the documentation on depsets. A short sketch follows this list.
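
As a quick illustration, here is a minimal sketch of a rule implementation that builds a depset (rule and field names are illustrative; the transitive= constructor argument is the stabilized spelling, and releases of this era also supported merging depsets with +):

def _mylib_impl(ctx):
  # Collect transitive files from deps without flattening them into a list;
  # depset nodes are shared, so memory scales linearly with the build graph.
  files = depset(
      ctx.files.srcs,
      transitive = [dep.mylib_files for dep in ctx.attr.deps],
  )
  return struct(mylib_files = files)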


A big thank you to our community for your continued support. Particular shout-outs to Peter Mounce for the Chocolatey Windows package and Yuki Yugui Sonoda for maintaining rules_go (they both received an open source peer bonus from Google).

Thank you all, keep the questions and bug reports coming!

See the full list of changes on GitHub.

JDK7 deprecation

The Bazel team has been maintaining a separate, stripped-down build of Bazel that runs with JDK 7. The 0.5.1 release will no longer provide this special version.

To address the problem of JDK 8 not being available on some machines, starting with version 0.5.0, our installer will embed a JDK by default.

If you have any concerns, please reach out to the bazel-discuss mailing list.



The 0.5.0 release provides three versions of the installer:

  • default version, with embedded JDK.
  • version without embedded JDK.
  • last release compatible with JDK 7.

Releases after 0.5.0 will provide two versions:

  • default version, with embedded JDK.
  • version without embedded JDK.

Migration path:

If you are currently using Bazel with JDK 7, then starting with version 0.5.0 you must start using the default installer.

If you are currently using the default installer and do not want to use a version with embedded JDK, then use the -without-jdk version.


Homebrew and Debian packages do not contain the embedded JDK. This change only affects the shell installers.


Thanks to everybody for bearing with the JDK 7 related issues, and to the Java team at Google, in particular Liam Miller-Cushon.

Special thanks to Philipp Wollermann who made this new installer possible.

A glimpse of the design of Skylark

This blog post describes the design of Skylark, the language used to specify builds in Bazel.

A brief history

Many years ago, code at Google was built using Makefiles. As other people noticed, Makefiles don't scale well with a large code base. A temporary solution was to generate Makefiles using Python scripts, where the description of the build was stored in BUILD files containing calls to the Python functions. But this solution was way too slow, and the bottleneck was Make.

The project Blaze (later open-sourced as Bazel) was started in 2006. It used a simple parser to read the BUILD files (supporting only function calls, list comprehensions and variable assignments). When Blaze could not directly parse a BUILD file, it used a preprocessing step that ran the Python interpreter on the user BUILD file to generate a simplified BUILD file. The output was used by Blaze.

This approach was simple and allowed developers to create their own macros. But again, this led to lots of problems in terms of maintenance, performance, and safety. It also made any kind of tooling more complicated, as Blaze was not able to parse the BUILD files itself.

In the current iteration of Bazel, we've made the system saner by removing the Python preprocessing step. We kept the Python syntax, though, in order to migrate our codebase. This seems to be a good idea anyway: Many people like the syntax of our BUILD files and other build tools (e.g. Buck, Pants, and Please) have adopted it.

Design requirements

We decided to separate the description of the build from the extensions (macros and rules). The description of the build resides in BUILD files and the extensions reside in .bzl files, although they are all evaluated with the same interpreter. We want the code to be easy to read and maintain. We designed Bazel to be used by thousands of engineers; most of them are not familiar with build system internals, and most of them don't want to spend time learning a new language. BUILD files need to be simple and declarative, so that we can build tools to manipulate them.

The language also needed to:

  • Run on the JVM. Bazel is written in Java. The data structures should be shared between Bazel and the language (due to memory requirements in large builds).

  • Use a Python syntax, to preserve our codebase.

  • Be deterministic and hermetic. We have to guarantee that the execution of the code will always yield the same results. For example, we forbid access to I/O and date and time, and ensure deterministic iteration order of dictionaries.

  • Be thread-safe. We need to evaluate a lot of BUILD files in parallel. Execution of the code needs to be thread-safe in order to guarantee determinism.

Finally, we have performance concerns. A typical BUILD file is simple and can be executed quickly. In most cases, evaluating the code directly is faster than compiling it first.

Parallelism and imports

One special feature of Skylark is how it handles parallelism. In Bazel, a large build requires the evaluation of hundreds of BUILD files, so we have to load them in parallel. Each BUILD file may use any number of extensions, and those extensions might need other files as well. This means that we end up with a graph of dependencies.

Bazel first evaluates the leaves of this graph (i.e. the files that have no dependencies) in parallel. It will load the other files as soon as their dependencies have been loaded, which means the evaluation of BUILD and .bzl files is interleaved. This also means that the order of the load statements doesn't matter at all.

Each file is loaded at most once. Once it has been evaluated, its definitions (the global variables and functions) are cached. Any other file can access the symbols through the cache.

Since multiple threads can access a variable at the same time, we need a restriction on side-effects to guarantee thread-safety. The solution is simple: when we cache the definitions of a file, we "freeze" them. We make them read-only, i.e. you can iterate on an array, but not modify its elements. You may create a copy and modify it, though.
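
For example (a sketch; file and symbol names are illustrative), a list loaded from another file cannot be mutated, but a copy of it can:

# In helpers.bzl:
CONSTANTS = [1, 2, 3]

# In another .bzl file:
load(":helpers.bzl", "CONSTANTS")

def tweaked_constants():
  # CONSTANTS.append(4) would fail here: the loaded list is frozen.
  copy = list(CONSTANTS)
  copy.append(4)
  return copy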

In a future blog post, we'll take a look at the other features of the language.

By Laurent Le Brun

Skylark and Java rules interoperability

As of Bazel 0.4.4, Java compilation is possible from a Skylark rule. This facilitates Skylark and Java interoperability and allows creating what we call Java sandwiches in Bazel.

What is a Bazel Java sandwich?

A Java sandwich refers to custom rules written in Skylark being able to depend on Bazel native rules (e.g. java_library) and the other way around. A typical Java sandwich in Bazel could be illustrated like this:

java_library(name = "top", ...)
java_skylark_library(name = "middle", deps = [":top", ...], ...)
java_library(name = "bottom", deps = [":middle", ...], ...)

Built-in support for Java

In Skylark, an interface to built-in Java functionality is available via the java_common module. The full API can be found in the documentation.


java_common.compile

Compiles Java source files/jars from the implementation of a Skylark rule and returns a java_common.provider that encapsulates the compilation details.


java_common.merge

Merges the given providers into a single java_common.provider.


To allow other Java rules (native or custom) to depend on a Skylark rule, the Skylark rule should return a java_common.provider. All native Java rules return java_common.provider by default, which makes it possible for any Java related Skylark rule to depend on them.

For now, there are 3 ways of creating a java_common.provider:

  1. The result of java_common.compile.
  2. Fetching it from a Java dependency.
  3. Merging multiple java_common.provider instances using java_common.merge.

Using the Java sandwich with compilation example

This example illustrates the typical Java sandwich described above, which makes use of Java compilation:

java_library(name = "top", ...)
java_skylark_library(name = "middle", deps = [":top", ...], ...)
java_library(name = "bottom", deps = [":middle", ...], ...)

In the BUILD file we load the Skylark rule and define the targets (source file names here are illustrative):

load(':java_skylark_library.bzl', 'java_skylark_library')

java_library(
  name = "top",
  srcs = ["A.java"],
  deps = [":middle"],
)

java_skylark_library(
  name = "middle",
  srcs = ["B.java"],
  deps = [":bottom"],
)

java_library(
  name = "bottom",
  srcs = ["C.java"],
)

The implementation of java_skylark_library rule does the following:

  1. Collects all the java_common.providers from its dependencies and merges them using java_common.merge.
  2. Creates an artifact that will be the output jar of the Java compilation.
  3. Compiles the specified Java source files using java_common.compile, passing as dependencies the collected java_common.providers.
  4. Returns the output jar and the java_common.provider resulting from the compilation.
def _impl(ctx):
  deps = []
  for dep in ctx.attr.deps:
    if java_common.provider in dep:
      # Fetch the provider from each Java dependency.
      deps.append(dep[java_common.provider])

  output_jar = ctx.new_file("lib" + ctx.label.name + ".jar")

  compilation_provider = java_common.compile(
    ctx,
    source_files = ctx.files.srcs,
    output = output_jar,
    javac_opts = [],
    deps = deps,
    strict_deps = "ERROR",
    java_toolchain = ctx.attr._java_toolchain,
    host_javabase = ctx.attr._host_javabase,
  )
  return struct(
    files = set([output_jar]),
    providers = [compilation_provider],
  )

java_skylark_library = rule(
  implementation = _impl,
  attrs = {
    "srcs": attr.label_list(allow_files=True),
    "deps": attr.label_list(),
    "_java_toolchain": attr.label(default = Label("@bazel_tools//tools/jdk:toolchain")),
    "_host_javabase": attr.label(default = Label("//tools/defaults:jdk")),
  },
  fragments = ["java"],
)
Just passing around information about Java rules example

In some use cases there is no need for Java compilation, but rather just passing information about Java rules around. A Skylark rule can have some other (irrelevant here) purpose, but if it is placed somewhere between two Java rules it should not lose information from bottom to top.

In this example we have the same Bazel sandwich as above:

java_library(name = "top", ...)
java_skylark_library(name = "middle", deps = [":top", ...], ...)
java_library(name = "bottom", deps = [":middle", ...], ...)

except that java_skylark_library won't make use of Java compilation, but will make sure that all the Java information encapsulated by the Java library bottom is passed on to the Java library top.

The BUILD file is identical to the one from the previous example.

The implementation of java_skylark_library rule does the following:

  1. Collects all the java_common.providers from its dependencies
  2. Returns the java_common.provider that resulted from merging the collected dependencies.
def _impl(ctx):
  deps = []
  for dep in ctx.attr.deps:
    if java_common.provider in dep:
      deps.append(dep[java_common.provider])

  deps_provider = java_common.merge(deps)
  return struct(
    providers = [deps_provider],
  )

java_skylark_library = rule(
  implementation = _impl,
  attrs = {
    "srcs": attr.label_list(allow_files=True),
    "deps": attr.label_list(),
    "_java_toolchain": attr.label(default = Label("@bazel_tools//tools/jdk:toolchain")),
    "_host_javabase": attr.label(default = Label("//tools/defaults:jdk")),
  },
  fragments = ["java"],
)

More to come

Right now there is no way of creating a java_common.provider that encapsulates compiled code (and its transitive dependencies) other than java_common.compile. For example, one may want to create a provider from a .jar file produced by some other means.

Soon there will be support for use cases like this. Stay tuned!

If you are interested in tracking the progress on the Bazel Java sandwich, you can subscribe to this GitHub issue.

Irina Iancu, on behalf of the Bazel Java team

A Google Summer of Code with Bazel

I'm happy to announce that Bazel has been accepted as a mentor organization for the Google Summer of Code 2017. If you are a student and interested in working on Bazel this summer, please read on.

Take a look at our ideas page: it is not exhaustive and we may extend it over time, but it should give you a rough idea of what you could work on. Feel free to come up with new ideas or suggest variations on our proposals. Not all projects on the page will be taken: we expect to accept up to three students. Students will not work with a single mentor; you can expect to interact with multiple people from the Bazel team (although there will be a main contact point). This will ensure you get timely responses and assistance, even if one of us goes on vacation.

This is the first time we are participating in Google Summer of Code, so please bear with us if some information is missing. We will update our ideas page to answer the most frequent questions.

If you have any questions, please contact us on the bazel-discuss mailing list.

I'm looking forward to hearing from you,

Laurent Le Brun, on behalf of the Bazel mentors.

Protocol Buffers in Bazel

Bazel currently provides built-in protocol buffer rules for Java, JavaLite and C++.

proto_library is a language-agnostic rule that describes relations between .proto files.

java_proto_library, java_lite_proto_library and cc_proto_library are rules that "attach" to proto_library and generate language-specific bindings.

By making a java_library (resp. cc_library) depend on a java_proto_library (resp. cc_proto_library), your code gains access to the generated code.

TL;DR - Usage example

TIP: A buildable example project is available.

NOTE: Bazel 0.4.4 lacks some features the example uses - you'll need to build Bazel from head. The easiest way is to install Bazel, download Bazel's source code, build it (bazel build //src:bazel) and copy it somewhere (e.g., cp bazel-bin/src/bazel ~/bazel).


WORKSPACE file

Bazel's proto rules implicitly depend on the protobuf distribution (described below, in "Implicit Dependencies and Proto Toolchains"). The following satisfies these dependencies:

TIP: This is a shortened version of the example's WORKSPACE file.

# proto_library rules implicitly depend on @com_google_protobuf//:protoc,
# which is the proto-compiler.
# This statement defines the @com_google_protobuf repo.
http_archive(
    name = "com_google_protobuf",
    urls = [""],
)

# cc_proto_library rules implicitly depend on @com_google_protobuf_cc//:cc_toolchain,
# which is the C++ proto runtime (base classes and common utilities).
http_archive(
    name = "com_google_protobuf_cc",
    urls = [""],
)

# java_proto_library rules implicitly depend on @com_google_protobuf_java//:java_toolchain,
# which is the Java proto runtime (base classes and common utilities).
http_archive(
    name = "com_google_protobuf_java",
    urls = [""],
)

BUILD files

TIP: This is a shortened version of the example's BUILD file.

    name = "person_java_proto",
    deps = [":person_proto"],

    name = "person_cc_proto",
    deps = [":person_proto"],
    name = "person_proto",
    srcs = ["person.proto"],
    deps = [":address_proto"],

    name = "address_proto",
    srcs = ["address.proto"],
    deps = [":zip_code_proto"],

    name = "zip_code_proto",
    srcs = ["zip_code.proto"],

This file yields the following dependency graph:

proto_library dependency graph

Notice how the proto_library rules provide structure for both the Java and C++ code generators, and how there's only one java_proto_library even though there are multiple .proto files.


Benefits

In comparison with a macro that's responsible for compiling all .proto files in a project:

  1. Caching + incrementality: changing a single .proto only causes the rebuilding of dependent .proto files. This includes not only regenerating code, but also recompiling it. For large proto graphs this could be significant.
  2. Depend on pieces of a proto graph from multiple places: in the example above, one can add a cc_proto_library that depends on zip_code_proto, and include it together with //src:person_cc_proto in the same project. Though they both transitively depend on zip_code_proto, there won't be a linking error.

Recommended Code Organization

  1. One proto_library rule per .proto file.
  2. A file named foo.proto will be in a rule named foo_proto, which is located in the same package.
  3. An X_proto_library that wraps a proto_library named foo_proto should be called foo_X_proto, and be located in the same package (see the sketch below).
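
A minimal sketch of these conventions for a hypothetical file foo.proto:

proto_library(
    name = "foo_proto",
    srcs = ["foo.proto"],
)

java_proto_library(
    name = "foo_java_proto",
    deps = [":foo_proto"],
)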


FAQ

Q: I already have rules named java_proto_library and cc_proto_library. Will there be a problem?
A: No. Since Skylark extensions imported through load statements take precedence over native rules with the same name, the new rule should not affect existing usage of the java_proto_library macro.

Q: How do I use gRPC with these rules?
A: The Bazel rules do not generate RPC code since protobuf is independent of any RPC system. We will work with the gRPC team to create Skylark extensions to do so. (C++ Issue, Java Issue)

Q: Do you plan to release additional languages?
A: We can relatively easily create py_proto_library. Our end goal is to improve Skylark to the point where these rules can be written in Skylark, making them independent of Bazel.

Q: How does one use well-known types? (e.g., any.proto, descriptor.proto)
A: Once is resolved, the following should be added to a .proto file: import google/protobuf/any.proto and the following: @com_google_protobuf//:well_known_types_protos to one's proto_library rule.

Q: Any tips for writing my own such rules?
A: First, make sure you're able to register actions that compile your target language (as far as I know, Bazel Python actions are not exposed to Skylark, for example).
Second, take extra care to generate unique symbol names and unique filenames. There's an implicit assumption that different proto rules with different options generate different symbols. For example, if you write a new rule foo_java_proto_library, it must not generate symbols that java_proto_library might. The risk is that a binary will contain both, leading to a one-definition rule violation (e.g., linking errors). The downside is that the binary might be bloated, as it must contain multiple copies of generated code for the same proto. We're working on a Skylark version of java_lite_proto_library which should provide a good example.

Implementation Details

Implicit Dependencies and Proto Toolchains

The proto_library rule implicitly depends on @com_google_protobuf//:protoc, which is the protocol buffer compiler. It must be a binary rule (in protobuf, it's a cc_binary). The rule can be overridden using the --proto_compiler command-line flag.

X_proto_library rules implicitly depend on @com_google_protobuf_X//:X_toolchain, which is a proto_lang_toolchain rule. These rules can be overridden using the --proto_toolchain_for_X command-line flags.

A proto_lang_toolchain rule describes how to call the protocol compiler, and what is the library (if any) that the resulting generated code needs to compile against. See an example in the protobuf repository.

Bazel Aspects

The X_proto_library rules are implemented using Bazel Aspects to get the best of both worlds -

  1. Only need a single X_proto_library rule for an arbitrarily-large proto graph.
  2. Incrementality, caching and no linking errors.

Conceptually, an X_proto_library rule creates a shadow graph of the proto_library it depends on, and each shadow node calls protocol-compiler and then compiles the generated code. This way, if there are multiple paths from a rule to a proto_library through X_proto_library, they all share the same node.
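
To make the shadow-graph idea concrete, here is a minimal sketch (not the actual implementation; all names are illustrative) of an aspect that propagates along deps edges, so each proto_library in the graph is visited exactly once:

def _codegen_aspect_impl(target, ctx):
  # The real rules register actions here that run the proto compiler on
  # target's sources and compile the generated code.
  return struct()

_codegen_aspect = aspect(
    implementation = _codegen_aspect_impl,
    attr_aspects = ["deps"],  # propagate to the deps of each visited target
)

def _x_proto_library_impl(ctx):
  return struct()

x_proto_library = rule(
    implementation = _x_proto_library_impl,
    attrs = {"deps": attr.label_list(aspects = [_codegen_aspect])},
)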

Descriptor Sets

When compiled on the command-line, a proto_library creates a descriptor set for the messages in its srcs. The file is a serialized FileDescriptorSet, which is described in descriptor.proto.

One use case for the descriptor set is generating code without having to parse .proto files (an open protobuf issue tracks this ability in the protobuf compiler).

The aforementioned file only contains information about the .proto files directly mentioned by a proto_library rule; the collection of transitive descriptor sets is available through the proto.transitive_descriptor_sets Skylark provider. See documentation in ProtoSourcesProvider.
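
For instance (a sketch; rule and attribute names are illustrative), a Skylark rule can read them from a proto_library dependency like this:

def _descriptors_impl(ctx):
  # Descriptor sets of the dep and everything it transitively depends on.
  descriptor_sets = ctx.attr.deps[0].proto.transitive_descriptor_sets
  return struct(descriptor_sets = descriptor_sets)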

By Carmi Grushko

Invalidation of repository rules

Remote repositories are the way to use dependencies from "outside" of the Bazel world in Bazel. Using them, you can download binaries from the internet or use ones from your own host. You can even use Skylark to define your own repository rules to depend on a custom package manager or to implement auto-configuration rules.

This post explains when Skylark repositories are invalidated and hence when they are executed.


The implementation attribute of the repository_rule defines a function (the fetch operation) that is executed inside a Skyframe function. This function is re-executed when one of its dependencies changes.

For repositories that are declared local (set local = True in the call to the repository_rule function), the fetch operation is performed on every call of the Skyframe function.

Since many dependencies can trigger this execution (if any part of the WORKSPACE file changes, for instance), a supplemental mechanism ensures that we re-execute the fetch operation only when strictly needed for non-local repository rules (see the design doc for more details).

Once this mechanism is released, Bazel will re-perform the fetch operation if and only if one of the following dependencies changes:

  • Skylark files needed to define the repository rule.
  • Declaration of the repository rule in the WORKSPACE file.
  • Value of any environment variable declared with the environ attribute of the repository_rule function. The values of those environment variables can be enforced from the command line with the --action_env flag (but this flag will invalidate every action of the build).
  • Content of any file used and referred using a label (e.g., //mypkg:label.txt not mypkg/label.txt).

Good practices regarding refetching

Declare your repository as local very carefully

First and foremost, declaring a repository local should be done only for rules that need to be eagerly invalidated and are fast to update. For native rules, this is used only for local_repository and new_local_repository.

Put all slow operations at the end, resolve dependencies first

Since a dependency might be unresolved when first asked for, the function will be executed up to the point where the dependency is requested, and that part will be replayed once the dependency is resolved. Therefore, resolve file dependencies at the top; for instance, prefer:

def _impl(repository_ctx):
   repository_ctx.file("BUILD", repository_ctx.attr.build_file)
   repository_ctx.download("http://example.com/BIGFILE", "BIGFILE", sha256 = "...")

myrepo = repository_rule(_impl, attrs = {"build_file": attr.label()})

over:

def _impl(repository_ctx):
   repository_ctx.download("http://example.com/BIGFILE", "BIGFILE")
   repository_ctx.file("BUILD", repository_ctx.attr.build_file)

myrepo = repository_rule(_impl, attrs = {"build_file": attr.label()})

(in the latter example, the download operation will be re-executed if build_file is not resolved when executing the fetch operation).

Declare your environment variables

To avoid spurious refetches of repository rules (and the impossibility of tracking all usages of environment variables), only environment variables that have been declared through the environ attribute of the repository_rule function invalidate the repositories.

Therefore, if you think your rule should re-run when an environment variable changes (as for auto-configuration rules), you should declare those dependencies, or your users will have to do bazel clean --expunge each time they change their environment.
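
A minimal sketch (rule and variable names are illustrative) of a repository rule that declares its environment dependency:

def _cc_probe_impl(repository_ctx):
  # Reading the variable is easy; declaring it below is what makes Bazel
  # refetch this repository whenever CC changes.
  cc = repository_ctx.os.environ.get("CC", "/usr/bin/gcc")
  repository_ctx.file("BUILD", "# toolchain: %s\n" % cc)

cc_probe = repository_rule(
    implementation = _cc_probe_impl,
    environ = ["CC"],
)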

We are now bazel.build!

As you might have seen either in our 0.4 announcement or simply by going to our website, we have recently switched over to the bazel.build domain name.

We decided to switch over to the new .build top-level domain, which reflects what Bazel is for: building!

Our old domain, bazel.io, will redirect to bazel.build for the foreseeable future.

Bazel 0.4.0 Released

We are delighted to announce the 0.4.0 release of Bazel. This release marks major improvements in support for Windows, sandboxing and performance.

We are also moving to a new domain: bazel.build is already up and running, and we are slowly moving every reference to point to that new domain.

Improvements from our roadmap

Java workers are now used by default

Java workers reduce the time of Java compilation by keeping a hot JVM in the background. This improves Java compilation speed by 5x, so we decided to make it the default.

Sandboxing now also works on OS X

With our Beta release, we added sandboxing of actions on Linux. This feature ensures that rules do not access undeclared inputs, allowing correct execution of actions. We leveraged the sandbox-exec command to provide a similar sandbox on OS X.

Other major changes since 0.3.0

We provide Bazel binaries for Windows

As announced in our Bazel on Windows blog post, we are now providing a binary distribution of Bazel for Windows. A Chocolatey package was contributed by Peter Mounce, so you can just do choco install bazel to get Bazel installed on Windows - big thanks to Peter Mounce! This release also marks a big step for us: the TensorFlow PIP package can now be built on Windows with Bazel!

Skylark implementation of repository rules

We now have implementations of two repository rules (git_repository and maven_jar) in Skylark, and we recommend using them instead of the native ones. To do so, simply add the following lines at the top of your WORKSPACE file:

load("@bazel_tools//tools/build_defs/repo:maven_rules.bzl", "maven_jar")

And various more

  • The --watchfs flag is ready to be turned on by default. It improves performance of Bazel, try it out!
  • The Linux sandbox got revamped for better performance and usability: no performance hit should be perceptible and accessing system tools should be possible.

For changes since 0.3.2 (the minor version before 0.4.0), see the release notes for changes.

Future plans

Looking ahead to 0.5.0:

  • With the help of your feedback, we will resolve the last issue to make our Windows port work seamlessly for Java, C++ and Python.
  • The new distributed execution API will be stabilized.


A big thank you to our community for your continued support. Particular shout-outs to the following contributors:

Thank you all, keep the discussion and bug reports coming!

IntelliJ and Android Studio support

We've recently open-sourced Bazel plugins for IntelliJ and Android Studio.

Key Features

  • Import a project directly from a BUILD file.
  • BUILD file integration: syntax highlighting, refactoring, find usages, code completion, etc. Skylark extensions are fully supported.
  • Compile your project and get navigable Bazel compile errors in the IDE.
  • Buildifier integration.
  • Support for Bazel run configurations for certain rule classes.
  • Run/debug tests directly through Bazel by right-clicking on methods/classes/BUILD targets.

How do I get started?

To try them out, you can install them directly from within the IDE (Settings > Plugins > Browse repositories), download them from the JetBrains plugin repository, or build directly from source.

Detailed docs are available here.

The plugins are currently Linux-only, with plans for Mac support in the future.

Bazel on Windows

We first announced experimental Windows support in 0.3.0. Since then, we've implemented support for building, running and testing C++, Java and Python, as well as improved performance and stability. Starting with Bazel version 0.3.2, we are making prebuilt Bazel Windows binaries available as part of our releases (installation instructions).

In addition to bootstrapping Bazel itself, we're also able to build significant parts of TensorFlow with Bazel on Windows (pull request). Bazel on Windows currently requires msys2 and still has a number of issues; our GitHub issue tracker has a full list of known issues.

Now, we need your help! Please try building your Bazel project on Windows, and let us know what works or what doesn't work yet, and what we can do better.

We are looking forward to what you build (on Windows)!

IDE support

One of Bazel's longest-standing feature requests is integration with IDEs. With the 0.3 release, we finally have all the machinery in place to implement IDE integration with Bazel. Simultaneously with that Bazel release, we are also making public two IDE plugins:

  • Tulsi: Bazel support for Xcode.
  • e4b: a sample Bazel plugin for Eclipse.

In this post, we will look into how Bazel enables IDE integration and how an IDE plugin integrating with Bazel can be implemented.

Principles of Bazel IDE support

Bazel BUILD files provide a description of a project’s source code: what source files are part of the project, what artifacts (targets) should be built from those files, what the dependencies between those files are, etc. Bazel uses this information to perform a build, that is, it figures out the set of actions needed to produce the artifacts (such as running a compiler or linker) and executes those actions. Bazel accomplishes this by constructing a dependency graph between targets and visiting this graph to collect those actions.

IDEs (as well as other tools working with source code) also need the same information about the set of sources and their roles; but instead of building the artifacts, IDEs use it to provide code navigation, autocompletion and other code-aware features.

In the 0.3.0 Bazel release, we are adding a new concept to Bazel - aspects. Aspects allow augmenting build dependency graphs with additional information and actions. Applying an aspect to a build target creates a "shadow dependency graph" reflecting all transitive dependencies of that target, and the aspect's implementation determines the actions that Bazel executes while traversing that graph. The documentation on aspects explains this in more detail.

Architecture of a Bazel IDE plugin.

As an example of how aspects are useful for IDE integration, we will take a look at a sample Eclipse plugin for Bazel support, e4b.

e4b includes an aspect, defined in a file e4b_aspect.bzl, that when applied to a particular target, generates a small JSON file with information about that target relevant to Eclipse. Those JSON files are then consumed by the e4b plugin inside Eclipse to build Eclipse's representation of a project, IClasspathContainer:

e4bazel workflow

Through the e4b plugin UI, the user specifies an initial set of targets (typically a java or android binary, a selection of tests, all targets in certain packages, etc.). The e4b plugin then invokes Bazel as follows:

bazel build //java/com/company/example:main \
--aspects e4b_aspect.bzl%e4b_aspect \
--output_groups ide-info

(some details are omitted for clarity; see e4b source for complete invocation)

The --aspects flag directs Bazel to apply e4b_aspect, exported from the e4b_aspect.bzl Skylark extension, to the target //java/com/company/example:main.

The aspect is then applied transitively to the dependencies of the specified targets, producing .e4b-build.json files for each target in the transitive closure of dependencies. The e4b plugin reads those outputs and provides a Classpath for Eclipse core to consume. If the input BUILD files change so that a project model needs to be re-synced, the plugin still invokes the exact same command: Bazel will rebuild only those files that are affected by the change, so the plugin need only reexamine the newly built .e4b-build.json files. ide-info is an output group defined by e4b_aspect; the --output_groups flag ensures that only the artifacts belonging to that group (and hence only to the aspect) are built, and therefore that no unnecessary build steps are performed.
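
For illustration, here is a minimal sketch of such an aspect (inspired by, but not identical to, e4b_aspect.bzl) that emits one JSON file per target into an ide-info output group:

def _ide_info_aspect_impl(target, ctx):
  json_file = ctx.new_file(target.label.name + ".e4b-build.json")
  ctx.file_action(
      output = json_file,
      content = '{"label": "%s"}' % str(target.label),
  )
  return struct(output_groups = {"ide-info": set([json_file])})

ide_info_aspect = aspect(
    implementation = _ide_info_aspect_impl,
    attr_aspects = ["deps"],  # walk the entire dependency graph
)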

The aspect uses the java provider on the targets to which it applies to access a variety of information about Java targets.

Bazel 0.3.0 Released

We are delighted to announce the 0.3.0 release of Bazel. This milestone is marked by support for IDE integration but also major features such as remote caching of build artifacts and experimental Windows support.

Improvements from our roadmap

IDE support

In this release, we made it possible to generate information for IDEs from Bazel build files using Skylark aspects.

Simultaneous with Bazel 0.3 release, we are announcing the availability of two projects integrating Bazel with different IDEs:

  • Tulsi is an Xcode plugin for Bazel. This is the same plugin that teams inside Google use for developing iOS applications.
  • e4b is an experimental Eclipse plugin for Bazel. It was made to illustrate the use of Skylark aspects for IDE integration. This is an experimental plugin but we welcome any contributions to it.

Windows support

Bazel can now bootstrap on Windows without admin privilege and can use the Microsoft Visual C++ toolchain. Windows support is still highly experimental and we have identified several issues and their solutions. We are dedicated to a good native experience on Windows.

Remote caching of distributed artifacts

Alpha Lam has contributed experimental support for distributed caching and execution. This is an ongoing area of development and several engineers from Google are working with Alpha to enhance that support.

Skylark remote repositories

Remote repository rules can now be created using Skylark. This can be used to support your custom protocols, interfacing with new packaging systems or even do auto-configuration to use a toolchain on your local disk. We use it especially to have a better out-of-the-box experience with C++ toolchains.

Other major changes since 0.2.0

  • We just open-sourced our BUILD file formatter, buildifier.
  • We now provide a Debian APT repository for installing Bazel; see the installation guide on how to use it.
  • Our JUnit test runner for Java tests (--nolegacy_bazel_java_test) is now the default.

For changes since 0.2.3 (the minor version before 0.3.0), see the release notes for changes.

Future plans

Looking ahead to 0.4.0:

  • The last blockers for --strategy=Javac=worker will be resolved, making Java builds faster.
  • Yue has made great progress in making Ulf's prototype of sandboxing for OS X real.


A big thank you to our community for your continued support. Particular shout-outs to the following contributors:

  • Justine Tunney - for developing and maintaining the Closure JS rules of Bazel.
  • Alpha Lam - for implementing remote caching/execution and following up on these features.
  • David Chen - for going above and beyond, far more than a standard 20% contribution: improving our documentation, creating the Skylark documentation generator, fixing bugs and contributing features in Bazel and helping out TensorFlow with their use of Bazel.

Thank you all, keep the discussion and bug reports coming!

Using Skylark remote repositories to auto-detect the C++ toolchain.

Skylark remote repositories let you create custom external repositories using Skylark. This not only enables creating rules for custom package systems such as PyPI but also generating a repository to reflect the toolchain installed on the workstation Bazel is running on. We explain here how we implemented auto-configuration for the C++ toolchain.


  • C++ toolchain: the set of binaries and libraries required to build C++ code.
  • Crosstool: a compiler capable of building for a certain architecture, which can be different from the host architecture (e.g., gcc running on Linux and building for Raspberry Pi).

C++ toolchains are configured in Bazel using a crosstool target and a CROSSTOOL file.

This crosstool target (:default_toolchain) is the first step in moving the contents of the CROSSTOOL file entirely into BUILD file rules. The CROSSTOOL file defines where to find the C++ compiler, its include directories, and also the various flags to use at each compilation step.

When your C++ compiler is not in the standard location, this static CROSSTOOL file cannot find it. To cope with the variety of installations out there, we created a cc_configure Skylark repository rule that generates a @local_config_cc//tools/cpp package containing a CROSSTOOL file generated from the information gathered from the operating system.


The cc_configure rule is actually a macro wrapping the cc_autoconf rule and enforcing the local_config_cc name for the repository.

Using the functions provided by repository_ctx, the cc_autoconf implementation discovers the binaries on the system, what version they are, and which options they support, then generates a configuration to match the local C++ toolchain.
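
As a sketch (not the actual cc_configure implementation; names are illustrative), such probing could look like this:

def _probe_impl(repository_ctx):
  cc = repository_ctx.which("gcc")  # locate the compiler on the PATH
  result = repository_ctx.execute([cc, "-dumpversion"])  # query its version
  repository_ctx.file(
      "BUILD",
      "# generated for gcc %s\n" % result.stdout.strip(),
  )

probe_cc = repository_rule(implementation = _probe_impl)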

Creating your own repository rules

When creating a Skylark remote repository, a few things should be taken into consideration:

  • The Skylark implementation of a remote repository is run during the loading phase of the repository, which means that unless the rule definition is changed in the WORKSPACE file or the implementation fails, it will not be re-run unless the user does a bazel clean --expunge. We are thinking of adding a command to force a re-run of that loading phase for a specific remote repository (#974).
  • A Skylark remote repository can perform many non-hermetic operations, so it is recommended to check as many things as possible to ensure hermeticity (and overall, we recommend using a vendored toolchain instead of an auto-detected one if reproducibility is important). For example, it is recommended to use the sha256 argument of the repository_ctx.download method.
  • Naming a rule can be complex, and we recommend not using the standard suffixes of classical rules for remote repositories (e.g., *_library or *_binary). If you create a package rule, a good name would probably be xxx_package (e.g., pypi_package). If you create an autoconfiguration rule, xxx_configure is probably the best name (e.g., cc_configure).

Easier Debugging of Sandbox Failures

We have often heard that debugging failed executions due to issues with the sandbox is difficult and requires knowledge of Bazel's sandboxing code to actually understand what's happening. With these changes, we hope that you will be able to solve common issues on your own, and that debugging generally becomes easier.

If you don't know much about the Bazel sandbox, you might want to read this blog post first.

What we did:

  • When using --verbose_failures and --sandbox_debug, Bazel now shows the detailed command that it ran when your build failed, including the part that sets up the sandbox.
  • When you copy & paste the shown command into a terminal, the failed command is rerun - but when it fails this time, we provide access to a shell inside a new sandbox which is the same as the old sandbox we made before, so that you can explore the sandbox from the inside and find out why the command failed.

How to use it:

Let’s say you wrote a Skylark rule and forgot to add your compiler to the input files. Before this change, when you ran bazel build, you would get several error messages like this:

ERROR: path/to/your/project/BUILD:1:1: compilation of rule '//path/to/your/project:all' failed: No such file or directory.
ERROR: /your/project/BUILD:x:1: Executing genrule //project/dir:genrule failed: bash failed: error executing command /path/to/your/compiler some command

But you probably have no idea what to do, because the error message is not detailed enough and you seem to have everything needed on your system.

With this new feature, you will get an error message like this instead:

ERROR: path/to/your/project/BUILD:1:1: compilation of rule '//path/to/your/project:all' failed:

Sandboxed execution failed, which may be legitimate (e.g. a compiler error), or due to missing dependencies. To enter the sandbox environment for easier debugging, run the following command in parentheses. On command failure, a bash shell running inside the sandbox will then automatically be spawned

namespace-sandbox failed: error executing command
  (cd /some/path && \
  exec env - \
    LANG=en_US \
    PATH=/some/path/bin:/bin:/usr/bin \
    PYTHONPATH=/usr/local/some/path \
  /some/path/namespace-sandbox @/sandbox/root/path/this-sandbox-name.params -- /some/path/to/your/some-compiler --some-params some-target)

Then you can simply copy & paste the command above in parentheses into a new terminal:

(cd /some/path && \
  exec env - \
    LANG=en_US \
    PATH=/some/path/bin:/bin:/usr/bin \
    PYTHONPATH=/usr/local/some/path \
  /some/path/namespace-sandbox @/sandbox/root/path/this-sandbox-name.params -- /some/path/to/your/some-compiler --some-params some-target)

There will be the same error message about not finding your compiler, but after that error message, you will find yourself in a bash shell inside a new sandbox. You can now debug the failure, e.g. you can explore the sandbox: look for any missing file, check for possible errors in your BUILD files, run your compiler again manually, or even use strace.

For this example, we run our compiler in the sandbox again manually and the error message shows No command ‘some-compiler’ found - looking around, you notice that the compiler binary is missing. This means it was not part of the action inputs, because Bazel always mounts all action inputs into the sandbox - so you check out your Skylark rule and notice that this is indeed the case. Adding your compiler to the input files in your Skylark rule should thus fix the error.

Next time you run bazel build, it should mount your compiler into the sandbox and thus find it correctly. If you get a different error, you could repeat the steps above.

Bazel 0.2.0 Released

We are delighted to announce the 0.2.0 release of Bazel. This release marks major improvements in support for external repositories, Skylark and testing, in particular how external repositories and Skylark can work together.

Improvements from our roadmap

Skylark rules can now be loaded from remote repositories

Skylark rules can now be loaded from a remote repository. For example, to use the Scala rules, add the following to your WORKSPACE file:

    name = "io_bazel_rules_scala",
    remote = "",
    tag = "0.0.1",
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_repositories")

This will download all of the tools the rules need to build Scala programs.

Then load and use normally from your BUILD files:

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library")

We will gradually move the existing rules to their own repositories, announcing changes on the mailing list.

Go build and test support

There is now Go language support, see the documentation for details.

Open sourcing tests

We also open sourced over a hundred tests and laid the foundation for open sourcing more. We will continue to open source more tests (both to increase Bazel's stability and to make contributing easier), but this marks a dramatic increase in test coverage.

Other major changes since 0.1.0

  • The --package_path definition in .bazelrc is no longer required, nor is the base_workspace/ directory.
  • JUnit test runner for Java tests - Use the --nolegacy_bazel_java_test flag (soon to be the default) to get XML output for easy integration into CI systems and easier debugging with --java_debug.
  • Skylark macros can now be loaded and used in the WORKSPACE file.
  • Remote repository filesystem changes are tracked.
  • Debian packages and a Homebrew recipe.

For changes since 0.1.5 (the minor version before 0.2.0), see the release notes for changes.

Future plans

Looking ahead to 0.3.0:

  • Windows support is coming! (See the Windows label to follow the progress there).
  • Remote caching and execution is in progress (see Alpha Lam's work).
  • Xcode integration and generic IDE support.
  • Ulf has been working on sandboxing for OS X, which will hopefully be available soon.
  • More work on parallelization. We currently have experimental support (which can be enabled with the --experimental_interleave_loading_and_analysis flag) which improves clean build time (~30% faster loading and analysis), especially for builds using a lot of select() expressions.


A big thank you to our community for your continued support. Particular shout-outs to the following contributors:

  • Brian Silverman - for tons of important bug fixes and answering lots of user questions.
  • Alpha Lam - for writing up design docs and implementing remote caching/execution.
  • P. Oscar Boykin - for putting tons of time and effort into the Scala rules, as well as being a tireless supporter on Twitter.

Thank you all, keep the discussion and bug reports coming!

Using Bazel in a continuous integration system

When doing continuous integration, you do not want your build to fail because a tool invoked during the build has been updated or some environmental conditions have changed. Because Bazel is designed for reproducible builds and keeps track of almost every dependency of your project, Bazel is a great tool for use inside a CI system. Bazel also caches results of previous builds, including test results, and will not re-run unchanged tests, speeding up each build.

Running Bazel on virtual or physical machines.

For our CI system, we use Google Compute Engine virtual machines for our Linux builds and a physical Mac mini for our Mac builds. Apart from the Bazel tests, which are run using our compile script, we also run some projects to validate Bazel binaries against: the Bazel Tutorial, re2, protobuf, and TensorFlow.

Bazel is reinstalled each time we run the tutorial or TensorFlow, but the Bazel cache is maintained across installs. The setup for those jobs is the following:

set -e

# Fetch the Bazel installer
export BAZEL_INSTALLER=${PWD}/bazel-installer/install.sh
curl -L -o ${BAZEL_INSTALLER} ${URL}

# Install bazel inside ${BASE}
bash "${BAZEL_INSTALLER}" \
  --base="${BASE}" \
  --bazelrc="${BASE}/bin/bazel.bazelrc" \
  --bin="${BASE}/binary"

# Run the build
BAZEL="${BASE}/binary/bazel --bazelrc=${BASE}/bin/bazel.bazelrc"
${BAZEL} test //...

This installs and tests a specific version of Bazel each time. Of course, if Bazel is installed on the path, one can simply run bazel test //.... However, even with reinstalling all the time, Bazel caching simply works.

Running Bazel inside a Docker container

Several people want to use Bazel in a Docker container. First of all, Bazel has some features that are incompatible with Docker:

  • Bazel runs by default in client/server mode using UNIX domain sockets, so if you cannot mount the socket inside the Docker container, then you must disable client-server communication by running Bazel in batch mode with the --batch flag.
  • Bazel sandboxes all actions on Linux by default, and this needs special privileges in the Docker container (enabled by --privileged=true). If you cannot enable the namespace sandbox, you can deactivate it in Bazel with the --genrule_strategy=standalone --spawn_strategy=standalone flags.

So the last step of the previous script would look like:

# Run the build
BAZEL="${BASE}/binary/bazel --bazelrc=${BASE}/bin/bazel.bazelrc --batch"
${BAZEL} test --genrule_strategy=standalone --spawn_strategy=standalone \
  //...

This build will however be slower because the server has to restart for every build and the cache will be lost when the Docker container is destroyed.

To prevent the loss of the cache, it is better to mount a persistent volume for ~/.cache/bazel (where the Bazel cache is stored).
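
For example (volume name illustrative, assuming Bazel runs as root inside the container), starting the container with docker run -v bazel-cache:/root/.cache/bazel ... keeps the cache in a named volume that survives container restarts.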

Return code and XML output

A final consideration when setting up a continuous integration system is getting the result from the build. Bazel has the following interesting exit codes when using test and build commands:

Exit code  Description
0          Success.
1          Build failed.
2          Command line problem: bad or illegal flags or command combination, or bad environment variables. Your command line must be modified.
3          Build OK, but some tests failed or timed out.
4          Build successful, but no tests were found even though testing was requested.
8          Build interrupted (by a Ctrl+C from the user, for instance), but we terminated with an orderly shutdown.

These return codes can be used to determine the reason for a failure (in our CI system, we mark builds that have exited with exit code 3 as unstable, and builds with other non-zero codes as failed).

You can also control how much information about test results Bazel prints out with the --test_output flag. Generally, printing the output of tests that fail with --test_output=errors is a good setting for a CI system.

Finally, Bazel's built-in JUnit test runner generates an Ant-style XML output file (in bazel-testlogs/pkg/target/test.xml) that summarizes the results of your tests. This test runner can be activated with the --nolegacy_bazel_java_test flag (this will soon be the default). Other tests also get a basic XML output file that contains only the result of the test (success or failure).

To get your test results, you can also use the Bazel dashboard, an optional system that automatically uploads Bazel test results to a shared server.

Persistent Worker Processes for Bazel

Bazel runs most build actions as a separate process. Many build actions invoke a compiler. However, starting a compiler is often slow: they have to perform some initialization when they start up, read the standard library, header files, low-level libraries, and so on. That’s why some compilers and tools have a persistent mode, e.g. sjavac, Nailgun and gcc server. Keeping a single process for longer and passing multiple individual requests to the same server can significantly reduce the amount of duplicate work and cut down on compile times.

In Bazel, we have recently added experimental support for delegating work to persistent worker processes that run as child processes of and are managed by Bazel. Our Javac wrapper (called JavaBuilder) is the first compiler that supports running as a worker.

We’ve tried the persistent JavaBuilder for a variety of builds and are seeing a ~4x improvement in Java build times, as Javac can now benefit from JIT optimizations over multiple runs and we no longer have to start a new JVM for every compile action. For Bazel itself, we saw a reduction in build time for a clean build from ~58s to ~16s (on repeated builds).

[Chart: build times for full and incremental builds, with and without workers]

If you often build Java code, we’d like you to give it a try. Just pass --strategy=Javac=worker to enable it or add build --strategy=Javac=worker to the .bazelrc in your home directory or in your workspace. Check the WorkerOptions class for flags to further tune the workers’ behavior or run “bazel help” and look for the “Strategy options” category. Let us know how it works for you.

We’re currently using a simple protobuf-based protocol to communicate with the worker process. Let us know if you want to add support for more compilers; in many cases, you can do that without any Bazel changes. However, the protocol is still subject to change based on your feedback.

About Sandboxing

We added sandboxing to Bazel just two weeks ago, and we've already seen a flurry of fixes to almost all of the rules to conform with the additional restrictions imposed by it.

What is sandboxing?

Sandboxing is the technique of restricting the access rights of a process. In the context of Bazel, we're mostly concerned with restricting file system access. More specifically, Bazel's file system sandbox contains only known inputs, such that compilers and other tools can't even see files they should not access.

(We currently also mount a number of system directories into the sandbox to allow running locally installed tools and make it easier to write shell scripts. See below.)

Why are we sandboxing in Bazel?

We believe that developers should never have to worry about correctness, and that every build should result in the same output, regardless of the current state of the output tree. If a compiler or tool reads a file without Bazel knowing it, then Bazel won't rerun the action if that file has changed, leading to incorrect incremental builds.

We would also like to support remote caching in Bazel, where incorrect reuse of cache entries is even more of a problem than on the local machine. A bad cache entry in a shared cache affects every developer on the project, and the equivalent of 'bazel clean', namely wiping the entire remote cache, rather defeats the purpose.

In addition, sandboxing is closely related to remote execution. If the build works well with sandboxing, then it will likely work well with remote execution - if we know all the inputs, we can just as well upload them to a remote machine. Uploading all files (including local tools) can significantly reduce maintenance costs for compile clusters compared to having to install the tools on every machine in the cluster every time you want to try out a new compiler or make a change to an existing tool.

How does it work?

On Linux, we're using user namespaces, which are available in Linux 3.8 and later. Specifically, we create a new mount namespace. We create a temporary directory into which we mount all the files that the subprocess is allowed to see. We then use pivot_root to make the temporary directory appear as the root directory for all subprocesses.

We also mount /proc, /dev/null, /dev/zero, and a temporary filesystem (tmpfs) on /tmp. We mount /dev/random and /dev/urandom, but recommend against their usage, as it can lead to non-reproducible builds.

We currently also mount /bin, /etc, /usr (except /usr/local), and every directory starting with /lib, to allow running local tools. In the future, we are planning to provide a shell with a set of Linux utilities, and to require that all other tools are specified as inputs.

What about Mac and Windows?

We are planning to implement sandboxing for OS X (using OS X sandboxing, see our roadmap) and eventually Windows as well.

What about networking?

At some point, we'd like to also reduce network access, probably also using namespaces, with a separate opt-out mechanism.

How do I opt-out of sandboxing?

Preferably, you should make all your rules and scripts work properly with sandboxing. If you need to opt out, you should talk to us first - at Google, the vast majority of actions are fully sandboxed, so we have some experience with how to make it work. For example, Bazel has a special mechanism to add information about the current user, date, time, or the current source control revision to generated binaries.

If you still need to opt out for individual rules, you can add the local = 1 attribute to genrule or *_test calls.
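For example, a (hypothetical) genrule that relies on a tool installed on the local machine could opt out like this:

genrule(
    name = "call_local_tool",
    outs = ["out.txt"],
    cmd = "my-local-tool > $@",  # relies on a host-installed tool, so it can't be sandboxed
    local = 1,
)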

If you're writing a custom rule in Skylark, then you cannot currently opt out. Instead, please file a bug and we'll help you make it work.

Bazel Builder Blasts Beyond Beta Barrier

Reposted from Google's Open Source blog.

We're excited to announce the Beta release of Bazel, an open source build system designed to support a wide variety of different programming languages and platforms.

There are lots of other build systems out there -- Maven, Gradle, Ant, Make, and CMake just to name a few. So what's special about Bazel? Bazel is what we use to build the large majority of software within Google. As such, it has been designed to handle build problems specific to Google's development environment, including a massive, shared code repository in which all software is built from source, a heavy emphasis on automated testing and release processes, and language and platform diversity. Bazel isn't right for every use case, but we believe that we're not the only ones facing these kinds of problems and we want to contribute what we've learned so far to the larger developer community.

Our beta release provides support for building and testing software in a variety of languages and platforms.

Check out the tutorial app to see a working example using several languages.

We still have a long way to go. Looking ahead towards our 1.0.0 release, we plan to provide Windows support, distributed caching, and Go support among other features. See our roadmap for more details and follow our blog or Twitter account for regular updates. Feel free to contact us with questions or feedback on the mailing list or IRC (#bazel on freenode).

By Jeff Cox, Bazel team

Build dashboard dogfood

WARNING: This feature has been removed (2017-04-19).

We've added a basic dashboard where you can see and share build and test results. It's not ready for an official release yet, but if you're feeling adventurous, feel free to give it a try (and please report any issues you find!).

First, you'll need to download or clone the dashboard project.

Run bazel build :dash && bazel-bin/dash and add this line to your ~/.bazelrc:

build --use_dash --dash_url=http://localhost:8080

Note that the bazel build will take a long time the first time you run it (the dashboard uses the AppEngine SDK, which is ~160MB and has to be downloaded). The "dash" binary starts up a local server that listens on port 8080.

With --use_dash specified, every build or test will publish info and logs to http://localhost:8080/ (each build will print a unique URL to visit).

See the README for documentation.

This is very much a work in progress. Please let us know if you have any questions, comments, or feedback.

Building deterministic Docker images with Bazel

Docker images are a great way to automate your deployment environment. By composing base images, you can create an (almost) reproducible environment and, using an appropriate cloud service, easily deploy those images. However, the V1 Docker build suffers from several issues:

  1. Docker image builds are non-hermetic, as they can run any command,
  2. Docker images are non-reproducible: each "layer" identifier is a random hex string (not a cryptographic hash of the layer content), and
  3. Docker image builds are not incremental, since Docker assumes that RUN foo always does the same thing.

Googlers working on Google Container Registry developed support for building reproducible Docker images using Skylark / Bazel that addresses these problems. We recently shipped it.

Of course, it does not support the RUN command, but the rule strips timestamps from the tar file and uses a SHA sum computed from the layer data as the layer identifier. This ensures reproducibility and correct incrementality.

To use it, simply create your images using the BUILD language:

load("/tools/build_defs/docker/docker", "docker_build")

   name = "foo",
   tars = [ "base.tar" ],

   name = "bar",
   base = ":foo",
   debs = [ "blah.deb" ],
   files = [ ":bazinga" ],
   volumes = [ "/asdf" ],

This will generate two docker images loadable with bazel run :foo and bazel run :bar. The foo target is roughly equivalent to the following Dockerfile:

FROM bazel/base

And the bar target is roughly equivalent to the following Dockerfile:

FROM bazel/foo
RUN dpkg -i blah.deb
ADD bazinga /
VOLUME ["/asdf"]

Using remote repositories, it is possible to fetch the various base images from the web, and we are working on providing a docker_pull rule to interact more fluently with existing images.

You can learn more about this docker support here.

Trimming your (build) tree

Reposted from @kchodorow's blog.

Jonathan Lange wrote a great blog post about how Bazel caches tests. Basically: if you run a test, change your code, then run a test again, the test will only be rerun if you changed something that could actually change the outcome of the test. Bazel takes this concept pretty far to minimize the work your build needs to do, in some ways that aren't immediately obvious.

Let's take an example. Say you're using Bazel to "build" rigatoni arrabiata, which could be represented as having the following dependencies:

Each food is a library which depends on the libraries below it. Suppose you change a dependency, like the garlic:

Bazel will stat the files of the "garlic" library and notice this change, and then make a note that the things that depend on "garlic" may have also changed:

The fancy term for this is "invalidating the upward transitive closure" of the build graph, aka "everything that depends on a thing might be dirty." Note that Bazel already knows that this change doesn't affect several of the libraries (rigatoni, tomato-puree, and red-pepper), so they definitely don't have to be rebuilt.

Bazel will then evaluate the "sauce" node and figure out whether its output has changed. This is where the secret sauce (ha!) happens: if the output of the "sauce" node hasn't changed, Bazel knows that it doesn't have to recompile rigatoni-arrabiata (the top node), because none of its direct dependencies changed!

The sauce node is no longer “maybe dirty” and so its reverse dependencies (rigatoni-arrabiata) can also be marked as clean.

In general, of course, changing the code for a library will change its compiled form, so the "maybe dirty" node will end up being marked as "yes, dirty" and re-evaluated (and so on up the tree). However, Bazel's build graph lets you compile the bare minimum for a well-structured library, and in some cases avoid compilations altogether.
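Here is a minimal sketch of that pruning logic (an illustration of the idea, not Bazel's actual implementation; all names are made up):

def evaluate(node, deps, compute, cache, changed):
    # Re-evaluate dependencies bottom-up first.
    for dep in deps.get(node, []):
        evaluate(dep, deps, compute, cache, changed)
    # "Maybe dirty" resolves to clean when no direct dependency changed.
    if node in cache and not any(d in changed for d in deps.get(node, [])):
        return
    output = compute(node)
    if cache.get(node) != output:
        cache[node] = output
        changed.add(node)  # only now do reverse deps have a changed input

Here cache maps each node to its last output and changed collects the nodes whose output actually differs, so a node that rebuilds to an identical output never dirties its reverse dependencies. (A real implementation would also memoize visits so that shared dependencies are evaluated only once.)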

Configuring your Java builds

Let's say that you want to build for Java 8 with Error Prone checks off, while keeping the tools directory provided with Bazel in the package path. You could do that with the following rc file:

build --javacopt="-extra_checks:off"
build --javacopt="-source 8"
build --javacopt="-target 8"

However, this file would quickly become overloaded, especially if you take all languages and options into account. Instead, you can tweak the java_toolchain rule, which specifies the various options for the Java compiler. So, in a BUILD file:

    name = "my_toolchain",
    encoding = "UTF-8",
    source_version = "8",
    target_version = "8",
    misc = [

And to keep it out of the tools directory (otherwise you would need to copy the rest of the package there), you can redirect the default toolchain in a bazelrc:

build --java_toolchain=//package:my_toolchain

In the future, toolchain rules should become the configuration point for all languages, but it is a long road. We also want to make it easier to rebind the toolchain using the bind rule in the WORKSPACE file.

Sharing your rc files

You can customize the options Bazel runs with in your ~/.bazelrc, but that doesn't scale when you share your workspace with others.

For instance, you could deactivate Error Prone's DepAnn check by adding the --javacopt="-Xep:DepAnn:OFF" flag to your ~/.bazelrc. However, ~/.bazelrc is not really convenient, as it is a user file that is not shared with your team. You could instead add an rc file at tools/bazel.rc in your workspace with the content you want to share with your team:

build --javacopt="-Xep:DepAnn:OFF"

This file, called a master rc file, is parsed before the user rc file. There are three paths to master rc files, read in the following order:

  1. tools/bazel.rc (depot master rc file),
  2. /path/to/bazel.bazelrc (rc file alongside the bazel binary), and
  3. /etc/bazel.bazelrc (system-wide bazel rc file).

The complete documentation on rc files is here.

Checking your Java errors with Error Prone

We recently open-sourced our support for Error Prone. Error Prone checks for common mistakes in Java code that will not be caught by the compiler.

We turned Error Prone on by default, but you can easily turn it off by using the Javac option -extra_checks:off. To do so, simply add --javacopt='-extra_checks:off' to the list of Bazel's options. You can also tune the checks Error Prone will perform by using the -Xep: flags.
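For example, to turn the checks off for a single build (the target name here is hypothetical):

bazel build --javacopt='-extra_checks:off' //my:target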

See Error Prone's documentation for more details.

Visualize your build

Reposted from Kristina Chodorow's blog.

Bazel lets you see a graph of your build dependencies. It could help you debug things, but honestly it's just really cool to see what your build is doing.

To try it out, you'll need a project that uses Bazel to build. If you don't have one handy, here's a tiny workspace you can use:

$ git clone
$ cd tiny-workspace

Make sure you've downloaded and installed Bazel and have added the following line to your ~/.bazelrc:

query --package_path %workspace%:[path to bazel]/base_workspace

Now run bazel query in your tiny-workspace/ directory, asking it to search for all dependencies of //:main and format the output as a graph:

$ bazel query 'deps(//:main)' --output graph >

This creates a file called, which is a text representation of the build graph. You can use dot (install it with sudo apt-get install graphviz) to create a png from it:

$ dot -Tpng < > graph.png

If you open up graph.png, you should see something like this:

You can see //:main depends on one source file and four targets (//:x, //tools/cpp:stl, //tools/default:crosstool, and //tools/cpp:malloc). All of the //tools targets are implicit dependencies of any C++ target: every C++ build you do needs the right compiler, flags, and libraries available, but they crowd your result graph. You can exclude these implicit dependencies by removing them from your query results:

$ bazel query --noimplicit_deps 'deps(//:main)' --output graph >

Now the resulting graph is just:

Much neater!

If you're interested in further refining your query, check out the docs on querying.

Stickers for Contributors

Bazel stickers

We just got Bazel stickers and we'd like to send them to all of the people who have sent us pull requests and patches over the last month. If you'd like some stickers, please send us your Github username and mailing address.

Let us know if you've done any of the following and we'll send you stickers:

  • Gone through a Gerrit code review.
  • Opened a pull request on GitHub.
  • Sent us a patch on the mailing list.
  • Started doing any of the above.

Thanks for your contributions, we really appreciate them.

Tell us about your Bazel project!

We're setting up a list of projects using Bazel. If you'd like us to list your project, send us the following information:

  1. The project's name.
  2. The language(s) it's using.
  3. Whether it uses Bazel + another build system or just Bazel.
  4. Any nice surprises/blockers you've run into using Bazel.
  5. Any other info or comments you have!

If you don't want your project publicly listed, we'd still love to hear about it. Please email us directly and let us know.

Support for Bash Shell Completion

We just pushed support for shell completion in the Bourne-Again Shell (bash). It eases the use of Bazel by completing its commands and the targets to build.

To use this new functionality, build the //scripts:bash_completion target from the Bazel repository:

bazel build //scripts:bash_completion

This will create a bazel-bin/scripts/bazel-complete.bash completion script. You can then copy this script to your completion directory (/etc/bash_completion.d on Ubuntu). If you don't want to install it globally or don't have such a directory, simply add the following line to your ~/.bashrc or ~/.bash_profile (the latter is recommended on OS X):

source /path/to/bazel/bazel-bin/scripts/bazel-complete.bash

After that, you should be able to press the Tab key after the bazel command in your shell and see the list of possible completions.
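For instance (a hypothetical session; the available completions depend on your workspace):

$ bazel bui<TAB>           # expands to: bazel build
$ bazel build //scri<TAB>  # offers matching targets, e.g. //scripts:bash_completion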

If you are interested in supporting other shells, the script is made up of two parts:

  1. scripts/bazel-complete-header.bash is the completion logic.
  2. bazel help completion dumps the list of Bazel's commands, their options, and, for commands and options that expect a value, a description of what is expected. This description is either:
  • an enum of values enclosed in brackets, e.g., {a,b,c};
  • a type description, currently one of:

    • label, label-bin, label-test, label-package for a Bazel label for, respectively, a target, a runnable target, a test, and a package,
    • path for a filesystem path,
    • info-key for one of the information keys as listed by bazel info;
  • a combination of possible values using | as a separator, e.g., path|{or,an,enum}.

Let us know if you have any questions or issues on the mailing list or GitHub.

Announcing simplified workspace creation

To create a new workspace, you can now simply create an empty WORKSPACE file in a directory.

Previously, you'd need to copy or symlink the tools directory into your project, which was unpopular ("Move my-project/ to be a subdirectory of base_workspace/? Ok. Ctrl-W.").

Miguel Alcon came up with a great idea for making this process simpler. Now the script will create a .bazelrc file in your home directory which tells Bazel where was run from and, thus, where it can find its tools when you build.

To use this new functionality, get the latest version of the code from GitHub, run ./, and then create a Bazel workspace by running touch WORKSPACE in any directory.
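For example, here is a complete hypothetical session (the genrule is just a stand-in for real build rules):

$ mkdir my-project && cd my-project
$ touch WORKSPACE
$ echo 'genrule(name = "hello", outs = ["hello.txt"], cmd = "echo hello > $@")' > BUILD
$ bazel build //:hello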

Some caveats to watch out for:

  • If you move the directory where Bazel was built, you will need to update your ~/.bazelrc file.
  • If you would like to use different tools than the ones finds or generates, you can create a tools/ directory in your project and Bazel will attempt to use that instead of the system-wide one.

See the getting started docs for more info about setting up your workspace.

Let us know if you have any questions or issues on the mailing list or GitHub.

Hello World

Welcome to the Bazel blog! We'll be using this forum for news and announcements.


For more frequent updates, follow us on Twitter.


Join the discussion at our mailing list.


Subscribe to our blog via the RSS Feed.