In 2013, Docker took containerization, a Linux construct that had existed for years, and commoditized it, wrapping it in a convenient interface that made it accessible to a wide audience. Since then, container adoption has blossomed. Much of the web runs on containers. Everything at Google, from Search to Gmail to YouTube, runs in containers. Containers are inescapable!
Inescapable, that is, except for one corner of the enterprise. The database.
Databases are special. Data is the most valuable commodity in an organization, and for many, the most precious data is entrusted to their Oracle databases. A failed web server is quickly rebuilt or replaced; data is not so easily reconstructed. But we must be careful not to confuse the value of the data with the database itself. Database hosts are little more than compute that runs software and interacts with storage. Objectively, there is nothing truly special about them, and as enterprises move toward consolidated, containerized platforms, it is inevitable that databases will be pressured to follow suit. This is why database technologists must understand the use cases and techniques needed to run databases reliably and securely on Docker and similar platforms.
Docker opens a world of possibilities for database users. Its low resource requirements, speed, portability, and flexibility empower users to do more, and do it faster than they might with other virtualization solutions. This is most evident in desktop and non-production environments, where Docker is used to:
• Evaluate performance
• Test code and application changes
• Validate backup and recovery methods
• Harden environments
• Practice database upgrades and migrations
• Provision gold images and datasets
• Prove architectures for sharding, replication, and disaster recovery
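To make the "gold image" idea above concrete: one common approach is to layer seed scripts onto a vendor-supplied database image, so every container starts with a known schema and dataset. The sketch below is hypothetical; the base image path and the setup-script directory follow conventions used by Oracle's sample container images, but verify both against the image you actually use:

```dockerfile
# Hypothetical gold-image sketch: the base image path, script names, and
# setup-script directory are assumptions modeled on Oracle's sample images.
FROM container-registry.oracle.com/database/free:latest

# In Oracle's sample images, scripts in this directory run once,
# after the database is first created.
COPY seed_schema.sql seed_data.sql /opt/oracle/scripts/setup/
```

Versioning a Dockerfile like this alongside its seed scripts means a known dataset is never more than a `docker build` and `docker run` away.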
As an example of what Docker can do, I'm currently working with Oracle on a bug in a customer's disaster recovery system. It's a production environment that can't be used to test solutions, and the problem itself is difficult to reproduce. Normally, I would have to document the configuration, then wait for Oracle Support to simulate and observe the issue. Using Docker, however, I was able to recreate the databases, script the process that reproduces the behavior, and provide it to Oracle Support as code, significantly shortening the cycle. Checking different scenarios and evaluating fixes is easy, too: it takes just a few minutes to rebuild four interdependent databases to a known starting point. Anyone, whether they're using Windows, Mac, or Linux, will see the same behavior, because all of the dependencies are built into the container images themselves.
Building a similar lab environment with virtual machines takes far more time and effort, and four databases in containers run comfortably on a laptop that couldn't support the same number of virtual machines.
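The "environment as code" approach described above might be captured in a Docker Compose file. This is only a sketch under assumptions: the image path, passwords, and two-service topology are placeholders for illustration, not a tested Data Guard configuration:

```yaml
# Hypothetical two-database lab; extend the pattern for four services.
# The image path and ORACLE_PWD values are placeholder assumptions.
services:
  db-primary:
    image: container-registry.oracle.com/database/free:latest
    environment:
      ORACLE_PWD: "ChangeMe_123"          # placeholder password
    ports:
      - "1521:1521"
    volumes:
      - primary-data:/opt/oracle/oradata
  db-standby:
    image: container-registry.oracle.com/database/free:latest
    environment:
      ORACLE_PWD: "ChangeMe_123"
    depends_on:
      - db-primary
    volumes:
      - standby-data:/opt/oracle/oradata
volumes:
  primary-data:
  standby-data:
```

`docker compose up -d` brings the lab to its known starting point, and `docker compose down -v` discards it entirely, which is what makes rebuilding in minutes possible.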
Join me in April for a series of webinars on running Oracle on Docker. On April 14, my session "Getting Started with Oracle on Docker" covers everything you need to know to start running Oracle databases in Docker. Then, on April 21, "Advanced Docker Recipes: Building Complex Oracle Environments" gets into more involved scenarios, including how to build Data Guard with Docker. Finally, on April 28, I present "Run Oracle Database Upgrades on Docker" and demonstrate how to create containers for testing and evaluating upgrades from 11g and 12c to 18c, 19c, and beyond. This is an excellent way to practice and perfect the upgrade process so it goes smoothly when it matters most!
Sean Scott, Oracle ACE Director.