LWN.net Weekly Edition for March 12, 2015
File sharing, streamlining, and support plans with ownCloud 8
Version 8.0 of the ownCloud web-service platform was released in February. As was the case with previous releases, a basic installation offers a variety of cloud-like services for managing information: shared file storage, contact and calendar synchronization, online document editing, and so forth. The project also supports an API on top of which a variety of third-party web apps can run. The new release brings with it a renewed effort to make installing and managing these add-on apps easier and more reliable, plus several tools to make running one's own, private ownCloud server simpler. Finally, the company that underwrites ownCloud's development has announced that users who run such private server installations will be able to purchase support plans—something that was previously reserved only for enterprise customers.
The 8.0 release comes about eight months after the last major update, 7.0. The project makes builds available in a variety of formats, from source archives to installer bundles intended for use on shared web hosting plans. Packages for a variety of Linux distributions are also available for download. There are desktop applications available for managing shared folders, and an Android app for device synchronization (the app, interestingly enough, is a for-pay offering in Google's Play Store, but is available for free through F-Droid).
Users interested in testing out ownCloud 8 on a publicly reachable server (as opposed to installing it locally on their own machine) also have an opportunity to do that. The project has a three-hour "test drive" program available through a web hosting provider. The trial offers 1GB of storage space and is fairly painless to set up (although one must still walk through the hosting company's full setup process, including frustrating steps like trying to guess at an available subdomain name).
There are a few changes in the project's release practices worth pointing out, though. First, in the past, there were two separate editions of ownCloud: the Community Edition and the Enterprise Edition—the latter being aimed at businesses and coupled with paid support plans from ownCloud, Inc. As of 8.0, the Community Edition has been renamed "ownCloud server" (although not all of the references on the web site have been updated to reflect this).
There are still functional differences between the offerings: the Enterprise version features integration with services likely to be necessary in corporate IT environments (like Microsoft SharePoint and Oracle databases), and it adds support for using some different file-storage back-ends (including Amazon S3 and Ceph) as primary storage. But, as of the 8.0 release, the extra functionality in the Enterprise edition comes via a separate set of Enterprise apps and different default configuration, not from a different server codebase. And non-Enterprise users can still use Amazon S3 and Ceph for storage—they simply do not come configured as the primary back-end storage layers.
The second change is that, starting with version 8, the project is moving to a time-based release schedule with an accompanying version-numbering scheme. Version 8.1 is scheduled to arrive in three months, followed by two more quarterly point releases (8.2 and 8.3), with 9.0 set to arrive one year from now.
Last, but certainly not least, ownCloud Inc. has announced that it will offer commercial support plans for users running the "server" (i.e., non-Enterprise) version of ownCloud 8. The support plans are on the low end compared to the Enterprise offerings—users get email support only, and only during 8-to-5 business hours (those hours being measured from offices in Europe or on the East or West coasts of the US). But that is still, hopefully, a more reliable tech-support avenue than asking questions on a community mailing list or IRC channel, and it may produce another revenue stream to support development.
So far, the company has managed to not build different features into the community edition and enterprise edition of the server, which is reassuring to see. Prior to version 8, there was an additional API in the enterprise edition; as will be discussed later, this has now been merged into the community version, too. There are also community-built substitutes available for several of the enterprise apps (such as logging or Shibboleth authentication).
To the cloud
![[ownCloud file sharing]](https://static.lwn.net/images/2015/03-owncloud-share-sm.png)
All in all, the changes found in the 8.0 release fall into a few general categories. A lot of work has gone into making user-interface (UI) improvements, both on the user-visible side and in the administrative interface. There are also a handful of new and updated features. Finally, the new release integrates some changes to the way third-party apps are designed and deployed—changes that may primarily interest app developers at present, but should make for a better user experience in the long run.
On the UI front, there is a new interface for working with shared files. In the web interface, one can open a pop-up dialog for each stored file and folder to change the sharing settings. There is a download link to provide to everyone who needs access to the file, plus straightforward password-protection and time-expiration checkboxes to limit that access when necessary. Any active sharing enabled for a file is also visible in the file browser thanks to an indicator that appears next to the file name.
There is also a "favorites" feature that, at the moment, is fairly limited in scope: the user can star files in the main file browser, then access these "favorite" files in a separate sidebar. But the project indicates that there is more to come here: "favorites" are just the first metadata field tracked by the application. The plan is to roll out additional metadata filters (like "recently used" and "recently changed") in future updates.
The 8.0 release notes also tout an improved search interface, although my tests found this feature to be a mixed bag. It is, indeed, remarkably fast at showing search results (and the search box is available on every screen, which is key). But it only appears to search the contents of the current folder—not including subfolders—which leaves quite a bit to be desired. That is particularly frustrating because the release notes include a screenshot indicating that ownCloud-wide search ought to be supported.
![[ownCloud administration]](https://static.lwn.net/images/2015/03-owncloud-admin-sm.png)
Interface improvements are available on the administrative side as well, which (in a practical sense) is likely to be just as important as UI improvements on the user side—considering how many early ownCloud users run their own server. In particular, the various administrative tasks have been streamlined into a single page with handy links in the sidebar to the important sections. There are also improved tools for managing large numbers of user accounts and user groups, letting administrators search and sort on multiple fields, apply changes to multiple selected users, edit existing group names, and so on—features that were unsupported in the past.
Finally, app installation has been significantly simplified. The available third-party apps are listed in an app-browser reminiscent of Firefox's current add-on browser. Each available app has a single "install" button, version and update information is clearly listed for each app, and there is a one-click tool for restricting access to each app by user group.
Behind the clouds
Under the hood, the revamped app-management system also marks a functional change. In previous ownCloud releases, the download bundle included an entire suite of add-on apps that were not enabled in the default settings. That made activating them rapid, of course, but it also made for a much larger download. Starting in version 8.0, only the basic file-storage and sync apps come built in; all of the others (including standard apps developed by the project, like Calendar and Contacts), are downloaded when they are installed from the web interface.
Another set of less-visible changes affects file sharing. Starting with version 8.0, file sharing supports federation—that is, a folder can be shared directly between two ownCloud instances running on different hosts, not just between one ownCloud instance and a desktop machine. Users set up a federated share by entering the remote user's address (in user@server form) in the "Share with a user or group" field. At the moment, that relies on the user already knowing the correct username and address of the other ownCloud server, but it is a step in the right direction, and is more secure than emailing a public link to the folder in question.
The other new file-sharing feature is support for downloading a file directly from its underlying storage (e.g., Dropbox, Amazon's S3, a Gluster server). By bypassing the need to funnel the download through the ownCloud server, this should significantly speed up file access when large groups of people work on the same set of files, or for ownCloud servers that simply have a lot of user accounts.
For third-party app developers, ownCloud 8.0 also includes some changes to app packaging and development. Dependency management is now built into ownCloud server; an app needs to include a list of any dependencies in an XML file, but the ownCloud server will automatically resolve those dependencies (where possible) when a user installs an app. That includes dependencies on underlying system tools (such as a database version or library) and specific PHP extensions, as well as simpler dependency issues like ensuring that the correct version of ownCloud itself is running on the server.
![[ownCloud app administration]](https://static.lwn.net/images/2015/03-owncloud-appadmin-sm.png)
There have also been a number of cleanups to the app API, with an emphasis on providing a more stable and predictable platform for app developers. Evidently, in previous releases, it was far from uncommon for a third-party app to rely directly on ownCloud's internal PHP classes and methods, leading to obvious stability problems across upgrades. The project has updated its developer documentation and tutorials to reflect this; users may only notice the change when they encounter less breakage in third-party apps.
There is also one entirely new API available in ownCloud 8.0: the user provisioning API, which enables external tools to query and change various user account settings like storage quotas, and to create or modify users and groups. It is most useful from an administrative standpoint, but it is interesting to note that the API was originally an Enterprise-Edition-only feature that has now been added to the non-Enterprise edition.
Evaluating the changes in ownCloud 8.0 can be a subjective affair. What one gets out of ownCloud depends on how one intends to use it. As a replacement for proprietary cloud services like Google Drive and Google Calendar, the latest version is easy to use and just as powerful. How one feels about all the additional apps might vary somewhat—I found the Documents collaborative-editor app to be a bit more awkward and less integrated, for instance.
But the project is doing well to focus on the core—whatever other apps anyone uses, everyone needs access to files of some sort. It will also be interesting to see how the support plans for non-Enterprise customers fare as a fundraising endeavor. Other free-software web-application projects would, no doubt, like to find a reliable revenue stream that does not hinge on "open core" shenanigans or charging for commodities like file storage. Perhaps lightweight end-user support, if done right, could be just such an opportunity.
A GPL-enforcement suit against VMware
When Karen Sandler, the executive director of the Software Freedom Conservancy, spoke recently at the Linux Foundation's Collaboration Summit, she spent some time on the Linux Compliance Project, an effort to improve compliance with the Linux kernel's licensing rules. This project, launched with some fanfare in 2012, has been relatively quiet ever since. Karen neglected to mention that this situation was about to change; that had to wait for the announcement on March 5 of the filing of a lawsuit against VMware alleging copyright infringement for its use of kernel code. This suit, regardless of its outcome, should help to bring some clarity to the question of what constitutes a derived work of the kernel.
In her talk, Karen said that the Conservancy gets "passionate requests"
for enforcement of the GNU General Public License (GPL) from two distinct
groups: "ideological developers" and corporate general counsels. The
interest from the developers is clear: they released their code under the
GPL for a reason, and they want its terms to be respected. On the other
hand, a typical general counsel releases little code under any license. Their
interest, instead, is in a demonstration that the GPL has teeth so that they
can be taken seriously when they tell management that the company must
comply with the license terms of the code it ships.
The VMware suit should bring some comfort to both groups, in that it targets the primary product of a prominent company that has long been seen in some circles as pushing the boundaries of the GPL. But, beyond that, the suit will be of interest to the larger group of people that would like more clarity on just where the "derived work" line is drawn.
The complaint
The complaint has been filed in Hamburg, Germany, in the name of kernel developer Christoph Hellwig; the Conservancy is helping to fund the case and the lawyer involved is Till Jaeger, who also represented Harald Welte in his series of successful compliance cases. It focuses on the "vmkernel" component of VMware's vSphere ESXi 5.5.0 hypervisor product — one of VMware's primary sources of revenue.
VMware openly uses Linux as part of the ESXi product, and it ships the source for (presumably) all of the open-source components it uses; that code can be downloaded from VMware's web site. But ESXi is not a purely open-source product; it also contains a proprietary component called "vmkernel." The bootstrap process starts with Linux, which loads a module called "vmklinux." That module, in turn, loads the vmkernel code that does the actual work of implementing the hypervisor functionality. [Update: in truth, newer versions of ESXi no longer need the initial Linux bootstrap; in current versions, vmkernel boots directly.]
To many, the mere fact that vmkernel was once loaded into the kernel by a module is enough to conclude that it is a derived product of the kernel and, thus, only distributable under the terms of the GPL. That would make an interesting case in its own right, but there is more to it than that. It would seem that vmkernel loads and uses quite a bit of Linux kernel code, sometimes in heavily modified form. The primary purpose of this use appears to be to gain access to device drivers written for Linux, but supporting those drivers requires bringing in a fair amount of core code as well.
If one downloads the source-release ISO image from the page linked above and untars vmkdrivers-gpl/vmkdrivers-gpl.tgz, one will find these components under vmkdrivers/src_92/vmklinux_92. There is some interesting stuff there. In vmware/linux_rcu.c, for example, is an "adapted" version of an early read-copy-update implementation from Linux. vmware/linux_signal.c contains signal-handling code, vmware/linux_task.c contains process-management code (including an implementation of schedule()), and so on. Of particular interest to this case are linux/lib/radix-tree.c (a copy of the kernel's radix tree implementation) and several files in the vmware directory containing a modified copy of the kernel's SCSI subsystem. Both of these subsystems carry Christoph's copyrights and, thus, give him the standing to pursue an infringement case against VMware.
The picture that emerges suggests that vmkernel is not just another binary-only kernel module making use of the exported interface. Instead, VMware's developers appear to have taken a substantial amount of kernel code, adapted it heavily, and built it directly into vmkernel itself. It seems plausible that, in a situation like this, the case that vmkernel is a derived product of the Linux kernel would be relatively easy to make.
Unfortunately, we cannot see the complaint itself, because "court proceedings are not public by default in Germany (unlike in the USA)", according to the FAQ maintained by the Conservancy.
In her talk, Karen stated that litigation is the Conservancy's last resort
after every other approach fails to obtain compliance. Certainly there can
be no accusations of a rush to litigation here; the first indications
of trouble emerged in 2007. The Conservancy raised the issue with
VMware a number of times with no luck.
Christoph approached VMware in August 2014
with his own request for compliance, starting a series of communications
that did
not lead to an agreement. There was a meeting in December where, it is
said, VMware wanted to propose a settlement but only under strict
non-disclosure terms — terms which Christoph refused. So, it seems, going
to court is about the only remaining option.
One might wonder about the choice to file in Germany; the FAQ gives the Conservancy's reasoning on that point.
It is worth adding that Germany's courts seem to be relatively friendly
toward this sort of claim, with the result that previous GPL-enforcement
cases filed there have tended to go well for the plaintiffs. The ability
to pick the battlefield is a powerful advantage in a dispute of this
nature.
A service to the community
Filing an enforcement lawsuit is an intimidating prospect for a number of
reasons. Karen's talk noted that there is a lot of tension around the topic of
GPL enforcement. Some people would rather that it were not done at all,
seeing it as an incentive for companies to avoid GPL-licensed code. There
are not many developers who want to make a stand in an enforcement effort;
the Linux Compliance Project, she said, contains a number of kernel
developers, but almost none of them want to stick their necks out in an
actual enforcement effort.
But, she said, there is value in such efforts. Companies worldwide spend
vast amounts of money to ensure that they are in compliance with
free-software licenses. In the absence of enforcement, some will certainly
question the value and necessity of that expense — and some will decide not
to bother. There are also highly successful projects that have resulted
from enforcement efforts; router distributions like OpenWrt are usually
featured at the top of that list. GPL enforcement, by making it clear that
everybody needs to play by the rules, is, she said, performing a service to
the community as a whole.
How that service plays out in this case is going to be interesting to
watch, which is good, since we are likely to be watching for some time.
Given that ESXi is at the core of VMware's business, VMware seems unlikely to
either release the code or withdraw the product willingly. So the case may
have to go all the way through trial, and perhaps through appeals as well.
But, at the end, perhaps we'll have a clearer idea of what constitutes a
derived product of the kernel; that could be seen to be a useful service
even if the enforcement effort itself fails.
GitHub unveils its Licenses API
Since opening its doors in 2008, GitHub has grown to become the largest
active project-hosting service for open-source software. But it has
also attracted a fair share of criticism for some of its
implementation choices—with one of the leading complaints being
that it takes a lax approach to software licensing. That, in turn,
leads to a glut of repositories bearing little or no licensing
details. The company recently announced a new tool to help combat the
license-confusion issue: a site-wide API for querying and reporting
license information. Whether that API is up to the task, however,
remains to be seen.
By way of background information, GitHub does not require users to
choose a license when setting up a new project. An existing project
can also be forked into a new repository with one click, but nothing
subsequently prevents the new repository's owner from changing or
removing the upstream license information (if it exists).
From a legal standpoint, of course, the fork inherits its
license from upstream automatically (unless the upstream project is
public domain or under some other less-common license). But from a
practical standpoint, this provenance is difficult to
trace. Throw in other GitHub users submitting pull requests for
patches that have no license information, and one has a recipe for
confusion.
None of the above
The bigger problem, however, is that the majority of GitHub repositories
carry no license information at all, because the users who own them
have not chosen to add such information. In 2013, GitHub introduced
its first tool designed to combat that issue, launching ChooseALicense.com, a web site
that explains the features and differences of popular FOSS licenses.
ChooseALicense.com allows GitHub users to select a license, and the GitHub
new-project-configuration page has a license selector, but using it is
not obligatory. In fact, the ChooseALicense.com home page includes a
"no license" choice as its final option. That "no license" link,
incidentally, attempts to explain the downside of selecting no
license—most notably, that the absence of a license strongly discourages
other developers (both FOSS and proprietary) from using or redistributing
the code in any fashion, for fear of getting entangled in a copyright
problem. But the page also points out that the GitHub
terms
of service dictate that other users have the right to view and
fork any GitHub repository.
One could probably quibble endlessly over the details of
ChooseALicense.com and its wording. The upshot, though, is that it
did not have a serious impact on the license-confusion problem. A
March 9 post
on the GitHub blog presented some startling statistics: that less than 20%
of GitHub repositories have a license, and that the percentage is declining.
The introduction of the license-selection tool in 2013 produced a
spike in licensed repositories, followed by a downward trend that
continues to the present. The post also included some statistics on license
popularity; the three licenses featured most prominently on the
license-chooser site (MIT, Apache, and GPLv2) are, unsurprisingly, the
most often selected.
This data set, however, is far from complete; as the post
explains, the team only logged licenses that were found in a file
named LICENSE, and only matched that file's contents against
a short set of known licenses. Nevertheless, GitHub did evidently
determine that the problem was real enough to warrant a new attempt at
a solution.
A new interface
The team's answer is a new site-wide API called, fittingly, the Licenses API.
It is currently in preview, which means that interested developers
must supply a special HTTP header with any requests in order to access it.
But the API is, at least currently, a frustratingly limited one.
It offers just three functions: listing the licenses that GitHub
recognizes, retrieving the details of an individual license, and
reporting the license detected for a given repository.
Arguably the biggest limitation is that, as was the case with the statistics
gathered for the blog post, the license of a repository is determined
only by examining the contents of a LICENSE file. On the
plus side, the license information returned by the API conforms to the
Software Package Data Exchange (SPDX) specification, which should make it easy to integrate with
existing software.
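For the curious, here is a minimal sketch of what a query against the preview API might look like from C with libcurl. The preview media type in the Accept header and the use of the ordinary repository endpoint to read the detected license are assumptions drawn from GitHub's documentation of the preview at the time; treat the details as illustrative rather than authoritative.

```c
/* Minimal sketch of a Licenses API preview request with libcurl.
 * Build with: cc licenses.c -lcurl
 * The preview media type below is an assumption based on GitHub's
 * documentation of the preview; it is not part of the announcement. */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
        CURL *curl;
        struct curl_slist *headers = NULL;
        CURLcode res;

        curl_global_init(CURL_GLOBAL_DEFAULT);
        curl = curl_easy_init();
        if (!curl)
                return 1;

        /* The special header that opts in to the API preview. */
        headers = curl_slist_append(headers,
                "Accept: application/vnd.github.drax-preview+json");

        /* With the preview header, repository metadata includes a
         * "license" object describing the detected license. */
        curl_easy_setopt(curl, CURLOPT_URL,
                "https://api.github.com/repos/octocat/Hello-World");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_USERAGENT, "licenses-api-example");

        res = curl_easy_perform(curl);   /* JSON body goes to stdout */
        if (res != CURLE_OK)
                fprintf(stderr, "request failed: %s\n",
                        curl_easy_strerror(res));

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return res == CURLE_OK ? 0 : 1;
}
```

Requesting https://api.github.com/licenses or https://api.github.com/licenses/mit with the same header covers the other two calls.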
To be sure, determining and counting licenses is not a simple
matter—as many in the community know. In 2013, for example, a
pair of presentations at the Free Software Legal and Licensing
Workshop explored several strategies for
tabulating statistics on FOSS license usage. Both presentations ended
with caveats about the difficulty of the problem—whatever
methodology is used to approach it.
Nevertheless, the GitHub Licenses API does appear to be strangely
naive in its approach. For example, it is well-established that a
significant number of projects place their license in a file named
COPYING, rather than LICENSE, because that has long
been the convention used by the GNU project. Even scanning for that
filename (or other obvious candidates, like GPL.txt) would
enhance the quality of the data available significantly. Far better
would be allowing the repository owner to designate what file contains
the license.
Furthermore, the Licenses API could be used to accumulate more
meaningful statistics, such as which forks include different license
information than their corresponding upstream repository, but there is
no indication yet that GitHub intends to pursue such a survey. It may
fall on volunteers in the community to undertake that sort of
work. There are, after all, multiple source-code auditing tools that are
compatible with SPDX and can be used to audit license information and
compliance. Regrettably, the GitHub Licenses API does not look like it will
lighten that workload significantly, since the information it returns
is so restricted in scope.
Power to choose
GitHub is right to be concerned about the paucity of license
information in the repositories hosted at its site. But both the
2013 license chooser and the new Licenses API seem to
stem from an assumption on GitHub's part that the reason so many
repositories lack licenses is that license selection is either
confusing or difficult to find information on. Neither effort strikes
at the heart of the problem: that GitHub makes license selection
optional and, thus, makes licensing an afterthought.
SourceForge has long required new projects to select a license while
performing the initial project setup. Later, when Google Code
supplanted SourceForge as the hosting service of choice, it, too,
required the user to select a license during the first step. So too
do Launchpad.net, GNU Savannah, and BerliOS. FedoraHosted and Debian's
Alioth both involve manually requesting access to create a new
project, a process that, presumably, involves discussing whether or
not the project will be released under a license compatible with that distribution.
It is hard to escape the fact that only GitHub and its direct
competitors (like Gitorious and GitLab) fail to raise the licensing
question during project setup, and equally hard to avoid the
conclusion that this is why they are littered with so many
non-licensed and mis-licensed repositories. An API for querying
licenses may be a positive step, but it is not
likely to resolve the problem, since it side-steps the underlying
issue.
Hopefully, the current form of the Licenses API is merely the
beginning, and GitHub will proceed to develop it into a truly useful
tool. There is certainly a need for one, and being the most active
project-hosting provider means that GitHub is best positioned to do
something about it.
Security
Progress in security module stacking
It would seem that a long-running saga in kernel development may be coming to a close. Stacking (also composing or chaining) of Linux Security Modules (LSMs) has been discussed, debated, and developed in kernel security circles for many years; we have looked at the issue from a number of angles starting in 2009 (and here), but patches go back to at least 2004. After multiple fits and starts, it looks like something might finally make its way into the mainline kernel.
In a nutshell, the problem is that any security enhancements that are suggested for the kernel are inevitably pushed toward the LSM API. But there can only be one LSM active in a given kernel instance and most distributions already have that slot filled. Linux capabilities would logically be implemented in an LSM, but that would conflict with any other module that was loaded. To get around that problem, capabilities have been hardwired into each LSM, so that the capability checks are done as needed by those modules. The Yama LSM has also been manually stacked, if it is configured into the kernel, by calling its four hooks before the hooks from the active LSM are called. These are ad hoc solutions that cannot really be used for additional modules that might need to all be active, so a better way has been sought.
The last time we looked in on the issue was after the 2013 Linux Security Summit (LSS). Smack creator Casey Schaufler, who has been the most recent one to push stacking, presented his solution to attendees; he was looking for feedback on his approach. Schaufler's proposal was a complex solution that attempted to solve "all" of the stacking problems at once. In particular, it allowed using more than one of the LSMs that provide a full security model (the so-called "monolithic" LSMs: SELinux, Smack, TOMOYO, and AppArmor), which is a bit hard to justify in some eyes. For most, the pressing need for stacking is to support several single-purpose LSMs atop one of those monolithic security models, much as is done with Yama.
In addition, Schaufler's patches tried to handle network packet labeling for multiple LSMs (to the extent possible) and added to the user-space interface under /proc/PID/attr. Each active LSM would have a subdirectory under attr with its attributes, while one LSM, chosen through a configuration option, would present its attributes in the main attr directory. These additions also added complexity, so the consensus that emerged from the 2013 LSS attendees was to go back to the basics.
Schaufler has been working on that simplification. The 21st version of the patch set was posted on March 9, though the changes in this round are mostly just tweaks. The previous version picked up an ack from Yama developer Kees Cook, was tested by SELinux developer Stephen Smalley, and got a "this version looks almost perfect" from TOMOYO developer Tetsuo Handa. It looks like it could get into security maintainer James Morris's branch targeting the -next tree, which might mean we will see it in 4.1.
The approach this time is a return to a much simpler world. Gone are the thoughts of stacking more than one monolithic LSM; this proposal creates a mechanism to stack the LSM hooks and to consult them when trying to decide on access requests. The interface for a given LSM used to be a struct security_operations that was filled out with pointers for each of the hooks to be called when making access decisions. That has been replaced with a union (security_list_options) that can hold a pointer to each of the different hook functions. That union is meant to allow for a single list type that can hold any of the hook functions, but still provide type checking.
Instead of filling in the sparse security_operations structure, LSMs now initialize an array that contains each of their hooks. That array gets handed off to the security_add_hooks() function, which adds each hook to the per-hook lists that the LSM infrastructure maintains internally. Those lists are initialized with the capabilities hooks; Yama hooks are then added if that LSM is configured into the kernel. For the rest of the LSMs, all of which are monolithic, only one can be chosen at boot time to have its hooks added to the lists.
When an access decision needs to be made, the hooks are called in the order that they were added. Unlike some previous iterations, the access checking will terminate when any of the hooks on the list denies access. If none do, then the access is allowed.
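A minimal sketch, with illustrative names rather than the real kernel structures, may make the shape of the mechanism clearer: each hook type gets its own list, LSMs append their entries at initialization, and the access check walks the list until some module objects.

```c
/* Illustrative sketch of stacked LSM hooks (not the actual kernel code).
 * One list per hook type; registration appends, access checks walk the
 * list in order and stop at the first denial. */
#include <stddef.h>

struct lsm_hook_entry {
        struct lsm_hook_entry *next;
        int (*file_permission)(void *file, int mask);   /* one hook type */
};

static struct lsm_hook_entry *file_permission_hooks;    /* head of the list */

/* Registration: each LSM appends its hooks at initialization time. */
static void add_file_permission_hook(struct lsm_hook_entry *e)
{
        struct lsm_hook_entry **p = &file_permission_hooks;

        while (*p)
                p = &(*p)->next;
        e->next = NULL;
        *p = e;
}

/* Access decision: call hooks in registration order; first denial wins. */
static int check_file_permission(void *file, int mask)
{
        struct lsm_hook_entry *e;
        int rc;

        for (e = file_permission_hooks; e; e = e->next) {
                rc = e->file_permission(file, mask);
                if (rc)                 /* nonzero means "denied" */
                        return rc;
        }
        return 0;                       /* nobody objected */
}
```

The actual patches do this for every hook defined in the new lsm_hooks.h, but the deny-short-circuits-the-walk logic above is the heart of it.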
That puts all of the machinery in place to provide stacking, but it doesn't allow choosing more than one of the monolithic LSMs on any given kernel boot. Multiple monolithic LSMs can be configured into the kernel, and one can be specified as the default, but that choice can be overridden with the security= kernel boot parameter. New LSMs could be added to the kernel code, as Yama has been, but those will presumably be configured into the kernel at build time.
Currently, Yama is the only smaller LSM in the tree and it is chosen (or not) at build time; the others are either not optional (capabilities) or can only have a single chosen representative added into the hook list at kernel initialization time. Essentially, Schaufler's patches avoid having multiple monolithic modules active in a given boot by not providing a mechanism to choose more than one. That avoids the conflicts and complexity that earlier attempts had run aground on, as he noted in the patch posting.
Another change that Schaufler has made is to split the security.h header file for LSMs in two: one for the internal, common LSM-handling mechanism (which stays in security.h) and one that defines the hooks and macros that will be used by LSMs (which is contained in the new lsm_hooks.h file). While that change is large in terms of lines of code, it is largely janitorial, but it will make the interface boundaries clearer.
If Schaufler's patches make it into the mainline, that may spur some of the smaller out-of-tree LSMs to "come in from the cold" and get submitted to the mainline. It may also help to remove the "single LSM" barrier that crops up when new security protections are proposed for the kernel. Providing a mechanism to support these kinds of protections, while steering clear of core kernel code, could lead to more of those protections in the mainline and, eventually, available in distributions. It will be interesting to see where that leads.
Brief items
Security quote of the week
But what it highlights is the fact that we're living in a world where we can't easily tell the difference between a couple of guys in a basement apartment and the North Korean government with an estimated $10 billion military budget. And that ambiguity has profound implications for how countries will conduct foreign policy in the Internet age.
Exploiting the DRAM rowhammer bug to gain kernel privileges
The Project Zero blog looks at the "Rowhammer" bug. "“Rowhammer” is a problem with some recent DRAM devices in which repeatedly accessing a row of memory can cause bit flips in adjacent rows. We tested a selection of laptops and found that a subset of them exhibited the problem. We built two working privilege escalation exploits that use this effect. One exploit uses rowhammer-induced bit flips to gain kernel privileges on x86-64 Linux when run as an unprivileged userland process. When run on a machine vulnerable to the rowhammer problem, the process was able to induce bit flips in page table entries (PTEs). It was able to use this to gain write access to its own page table, and hence gain read-write access to all of physical memory." (Thanks to Paul Wise)
New vulnerabilities
389-ds-base: multiple vulnerabilities
Package(s): 389-ds-base
CVE #(s): CVE-2014-8105 CVE-2014-8112
Created: March 6, 2015; Updated: March 26, 2015
Description: From the Red Hat advisory: An information disclosure flaw was found in the way the 389 Directory Server stored information in the Changelog that is exposed via the 'cn=changelog' LDAP sub-tree. An unauthenticated user could in certain cases use this flaw to read data from the Changelog, which could include sensitive information such as plain-text passwords. (CVE-2014-8105) It was found that when the nsslapd-unhashed-pw-switch 389 Directory Server configuration option was set to "off", it did not prevent the writing of unhashed passwords into the Changelog. This could potentially allow an authenticated user able to access the Changelog to read sensitive information. (CVE-2014-8112)
autofs: privilege escalation
Package(s): autofs
CVE #(s): CVE-2014-8169
Created: March 11, 2015; Updated: December 22, 2015
Description: From the openSUSE advisory: The automount service autofs was updated to prevent a potential privilege escalation via interpreter load path for program-based automount maps.
chromium-browser: multiple vulnerabilities
Package(s): chromium-browser
CVE #(s): CVE-2015-1213 CVE-2015-1214 CVE-2015-1215 CVE-2015-1216 CVE-2015-1217 CVE-2015-1218 CVE-2015-1219 CVE-2015-1220 CVE-2015-1221 CVE-2015-1222 CVE-2015-1223 CVE-2015-1224 CVE-2015-1225 CVE-2015-1226 CVE-2015-1227 CVE-2015-1228 CVE-2015-1229 CVE-2015-1230 CVE-2015-1231
Created: March 6, 2015; Updated: April 1, 2015
Description: From the Chromium changelogs:
CVE-2015-1213: Out-of-bounds write in skia filters.
CVE-2015-1214: Out-of-bounds write in skia filters.
CVE-2015-1215: Out-of-bounds write in skia filters.
CVE-2015-1216: Use-after-free in v8 bindings.
CVE-2015-1217: Type confusion in v8 bindings.
CVE-2015-1218: Use-after-free in dom.
CVE-2015-1219: Integer overflow in webgl.
CVE-2015-1220: Use-after-free in gif decoder.
CVE-2015-1221: Use-after-free in web databases.
CVE-2015-1222: Use-after-free in service workers.
CVE-2015-1223: Use-after-free in dom.
CVE-2015-1224: Out-of-bounds read in vpxdecoder.
CVE-2015-1225: Out-of-bounds read in pdfium.
CVE-2015-1226: Validation issue in debugger.
CVE-2015-1227: Uninitialized value in blink.
CVE-2015-1228: Uninitialized value in rendering.
CVE-2015-1229: Cookie injection via proxies.
CVE-2015-1230: Type confusion in v8.
CVE-2015-1231: Various fixes from internal audits, fuzzing and other initiatives.
dokuwiki: access control circumvention
Package(s): dokuwiki
CVE #(s): CVE-2015-2172
Created: March 6, 2015; Updated: March 27, 2015
Description: From the Mageia advisory: DokuWiki before 20140929c has a security issue in the ACL plugins remote API component. The plugin failed to check for superuser permissions before executing ACL addition or deletion. This means everybody with permissions to call the XMLRPC API also had permissions to set up their own ACL rules and thus circumventing any existing rules.
ecryptfs-utils: information disclosure
Package(s): ecryptfs-utils
CVE #(s): CVE-2014-9687
Created: March 11, 2015; Updated: July 30, 2015
Description: From the Ubuntu advisory: Sylvain Pelissier discovered that eCryptfs did not generate a random salt when encrypting the mount passphrase with the login password. An attacker could use this issue to discover the login password used to protect the mount passphrase and gain unintended access to the encrypted files.
glibc: denial of service
Package(s): glibc
CVE #(s): CVE-2014-8121
Created: March 6, 2015; Updated: September 28, 2015
Description: From the Red Hat advisory: It was found that the files back end of Name Service Switch (NSS) did not isolate iteration over an entire database from key-based look-up API calls. An application performing look-ups on a database while iterating over it could enter an infinite loop, leading to a denial of service.
glusterfs: denial of service
Package(s): glusterfs
CVE #(s): CVE-2014-3619
Created: March 11, 2015; Updated: April 27, 2015
Description: From the openSUSE advisory: glusterfs was updated to fix a fragment header infinite loop denial of service attack.
gnupg: multiple vulnerabilities
Package(s): gnupg
CVE #(s): CVE-2014-3591 CVE-2015-0837
Created: March 6, 2015; Updated: June 6, 2016
Description: From the Fedora bug reports: A side-channel attack which can potentially lead to an information leak. (CVE-2014-3591) A side-channel attack on data-dependent timing variations in modular exponentiation, which can potentially lead to an information leak. (CVE-2015-0837)
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2015-0275
Created: March 9, 2015; Updated: March 16, 2015
Description: From the Red Hat bugzilla: A flaw was found in the way the Linux kernel's EXT4 filesystem handled page size > block size condition when fallocate zero range functionality is used. Also from the Red Hat bugzilla, no CVE provided: It was reported that in vhost_scsi_make_tpg() the limit for "tpgt" is UINT_MAX but the data type of "tpg->tport_tpgt" and that is a u16. In the context it turns out that in vhost_scsi_set_endpoint(), "tpg->tport_tpgt" is used as an offset into the vs_tpg[] array which has VHOST_SCSI_MAX_TARGET (256) elements, so anything higher than 255 then is invalid. Attached patch corrects this. In vhost_scsi_send_evt() the values higher than 255 are masked, but now that the limit has changed, the mask is not needed.
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2014-8172 CVE-2014-8173 CVE-2015-0274
Created: March 6, 2015; Updated: March 11, 2015
Description: From the Red Hat advisory: A flaw was found in the way the Linux kernel's XFS file system handled replacing of remote attributes under certain conditions. A local user with access to XFS file system mount could potentially use this flaw to escalate their privileges on the system. (CVE-2015-0274) It was found that due to excessive files_lock locking, a soft lockup could be triggered in the Linux kernel when performing asynchronous I/O operations. A local, unprivileged user could use this flaw to crash the system. (CVE-2014-8172) A NULL pointer dereference flaw was found in the way the Linux kernel's madvise MADV_WILLNEED functionality handled page table locking. A local, unprivileged user could use this flaw to crash the system. (CVE-2014-8173)
lftp: automatically accepting ssh keys
Package(s): lftp
CVE #(s):
Created: March 5, 2015; Updated: March 11, 2015
Description: From the Red Hat bugzilla entry: It was reported that lftp saves unknown host's fingerprint in known_hosts without any prompt.
libarchive: directory traversal
Package(s): libarchive
CVE #(s): CVE-2015-2304
Created: March 6, 2015; Updated: March 30, 2015
Description: From the Debian advisory: Alexander Cherepanov discovered that bsdcpio, an implementation of the 'cpio' program part of the libarchive project, is susceptible to a directory traversal vulnerability via absolute paths.
libssh2: information leak
Package(s): libssh2
CVE #(s): CVE-2015-1782
Created: March 11, 2015; Updated: December 22, 2015
Description: From the Debian advisory: Mariusz Ziulek reported that libssh2, a SSH2 client-side library, was reading and using the SSH_MSG_KEXINIT packet without doing sufficient range checks when negotiating a new SSH session with a remote server. A malicious attacker could man in the middle a real server and cause a client using the libssh2 library to crash (denial of service) or otherwise read and use unintended memory areas in this process.
mapserver: command execution
Package(s): mapserver
CVE #(s): CVE-2013-7262
Created: March 9, 2015; Updated: March 20, 2015
Description: From the CVE entry: SQL injection vulnerability in the msPostGISLayerSetTimeFilter function in mappostgis.c in MapServer before 6.4.1, when a WMS-Time service is used, allows remote attackers to execute arbitrary SQL commands via a crafted string in a PostGIS TIME filter.
maradns: denial of service
Package(s): maradns
CVE #(s):
Created: March 6, 2015; Updated: March 11, 2015
Description: From the Mageia advisory: maradns versions prior to 1.4.16 are vulnerable to a DoS-vulnerability through which a malicious authorative DNS-server can cause an infinite chain of referrals.
mod-gnutls: restriction bypass
Package(s): mod-gnutls
CVE #(s): CVE-2015-2091
Created: March 11, 2015; Updated: March 16, 2015
Description: From the Debian advisory: Thomas Klute discovered that in mod-gnutls, an Apache module providing SSL and TLS encryption with GnuTLS, a bug caused the server's client verify mode not to be considered at all, in case the directory's configuration was unset. Clients with invalid certificates were then able to leverage this flaw in order to get access to that directory.
openstack-glance: denial of service
Package(s): openstack-glance
CVE #(s): CVE-2014-9623
Created: March 6, 2015; Updated: March 11, 2015
Description: From the Red Hat advisory: A storage quota bypass flaw was found in OpenStack Image (glance). If an image was deleted while it was being uploaded, it would not count towards a user's quota. A malicious user could use this flaw to deliberately fill the backing store, and cause a denial of service.
openssh: authentication bypass
Package(s): openssh
CVE #(s): CVE-2014-9278
Created: March 6, 2015; Updated: March 11, 2015
Description: From the Red Hat advisory: It was found that when OpenSSH was used in a Kerberos environment, remote authenticated users were allowed to log in as a different user if they were listed in the ~/.k5users file of that user, potentially bypassing intended authentication restrictions.
oxide-qt: denial of service
Package(s): oxide-qt
CVE #(s): CVE-2015-2238
Created: March 10, 2015; Updated: March 11, 2015
Description: From the CVE entry: Multiple unspecified vulnerabilities in Google V8 before 4.1.0.21, as used in Google Chrome before 41.0.2272.76, allow attackers to cause a denial of service or possibly have other impact via unknown vectors.
percona-toolkit: man-in-the-middle attack
Package(s): percona-toolkit, xtrabackup
CVE #(s): CVE-2015-1027
Created: March 11, 2015; Updated: March 11, 2015
Description: From the openSUSE advisory: Percona XtraBackup was vulnerable to MITM attack which could allow exfiltration of MySQL configuration information via the --version-check option.
php: predictable cache filenames
Package(s): PHP 5.3
CVE #(s): CVE-2013-6501
Created: March 6, 2015; Updated: March 11, 2015
Description: From the SUSE bug tracker: The php wdsl extension is reading predictable filename from a cache directory (default /tmp). Could allow injection of WSDL file.
pngcrush: denial of service
Package(s): pngcrush
CVE #(s): CVE-2015-2158
Created: March 11, 2015; Updated: March 11, 2015
Description: pngcrush-1.7.84 fixes defects reported by Coverity-scan, so it should be more resistant to crashes due to malformed input files, such as the one presented in CVE-2015-2158.
powerpc-utils: information disclosure
Package(s): powerpc-utils
CVE #(s): CVE-2014-4040
Created: March 6, 2015; Updated: March 11, 2015
Description: From the Red Hat advisory: A flaw was found in the way the snap utility of powerpc-utils generated an archive containing a configuration snapshot of a service. A local attacker could obtain sensitive information from the generated archive such as plain text passwords.
putty: information disclosure
Package(s): putty, filezilla
CVE #(s): CVE-2015-2157
Created: March 9, 2015; Updated: March 29, 2015
Description: From the Mageia advisory: PuTTY suite versions 0.51 to 0.63 fail to clear SSH-2 private key information from memory when loading and saving key files to disk, leading to potential disclosure. The issue affects keys stored on disk in encrypted and unencrypted form, and is present in PuTTY, Plink, PSCP, PSFTP, Pageant and PuTTYgen.
python: missing hostname check
Package(s): python
CVE #(s): CVE-2014-9365
Created: March 6, 2015; Updated: March 11, 2015
Description: From the Mageia advisory: When Python's standard library HTTP clients (httplib, urllib, urllib2, xmlrpclib) are used to access resources with HTTPS, by default the certificate is not checked against any trust store, nor is the hostname in the certificate checked against the requested host. It was possible to configure a trust root to be checked against, however there were no faculties for hostname checking (CVE-2014-9365). Note that this issue also affects python3, and is fixed upstream in version 3.4.3, but the fix was considered too intrusive to backport to Python3 3.3.x. No update for the python3 package for this issue is planned at this time.
qpid-cpp: multiple vulnerabilities
Package(s): qpid-cpp
CVE #(s): CVE-2015-0203 CVE-2015-0223 CVE-2015-0224
Created: March 10, 2015; Updated: June 22, 2015
Description: From the Red Hat advisory: It was discovered that the Qpid daemon (qpidd) did not restrict access to anonymous users when the ANONYMOUS mechanism was disallowed. (CVE-2015-0223) Multiple flaws were found in the way the Qpid daemon (qpidd) processed certain protocol sequences. An unauthenticated attacker able to send a specially crafted protocol sequence set could use these flaws to crash qpidd. (CVE-2015-0203, CVE-2015-0224)
redhat-access-plugin-openstack: information disclosure
Package(s): redhat-access-plugin-openstack
CVE #(s): CVE-2015-0271
Created: March 6, 2015; Updated: March 11, 2015
Description: From the Red Hat advisory: It was found that the local log-viewing function of the redhat-access-plugin for OpenStack Dashboard (horizon) did not sanitize user input. An authenticated user could use this flaw to read an arbitrary file with the permissions of the web server.
tiff: multiple vulnerabilities
Package(s): tiff
CVE #(s): CVE-2014-8127 CVE-2014-8128 CVE-2014-8129 CVE-2014-8130 CVE-2014-9655 CVE-2015-1547
Created: March 9, 2015; Updated: November 2, 2016
Description: From the openSUSE advisory: LibTIFF was updated to fix various security issues that could lead to crashes of the image decoder.
vlc: code execution
Package(s): vlc
CVE #(s): CVE-2014-6440
Created: March 6, 2015; Updated: March 11, 2015
Description: From the Mageia advisory: VLC versions before 2.1.5 contain a vulnerability in the transcode module that may allow a corrupted stream to overflow buffers on the heap. With a non-malicious input, this could lead to heap corruption and a crash. However, under the right circumstances, a malicious attacker could potentially use this vulnerability to hijack program execution, and on some platforms, execute arbitrary code.
xen: multiple vulnerabilities
Package(s): xen
CVE #(s): CVE-2015-2044 CVE-2015-2045 CVE-2015-2151
Created: March 11, 2015; Updated: March 23, 2015
Description: From the Debian advisory: Multiple security issues have been found in the Xen virtualisation solution:
CVE-2015-2044: Information leak via x86 system device emulation.
CVE-2015-2045: Information leak in the HYPERVISOR_xen_version() hypercall.
CVE-2015-2151: Missing input sanitising in the x86 emulator could result in information disclosure, denial of service or potentially privilege escalation.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 4.0-rc3, released on March 8. "Back on track with a Sunday afternoon release schedule, since there was nothing particularly odd going on this week, and no last-minute bugs that I knew of and wanted to get fixed holding things up."
Stable updates: 3.19.1, 3.18.9, 3.14.35, and 3.10.71 were all released on March 7.
Quotes of the week
The kernel's code of conflict
A brief "code of conflict" was merged into the kernel's documentation directory for the 4.0-rc3 release. The idea is to describe the parameters for acceptable discourse without laying down a lot of rules; it also names the Linux Foundation's technical advisory board as a body to turn to in case of unacceptable behavior. This document has been explicitly acknowledged by a large number of prominent kernel developers.Sasha Levin picks up 3.18 maintenance
By the normal schedule, the 3.18 stable update series is due to come to an end about now. In this case, though, Sasha Levin has decided to pick up the maintenance for this kernel, so updates will continue coming through roughly the end of 2016.
Kernel development news
Progress on persistent memory
It has been "the year of persistent memory" for several years now, Matthew Wilcox said with a chuckle to open his plenary session at the 2015 Storage, Filesystem, and Memory Management summit in Boston on March 9. Persistent memory refers to devices that can be accessed like RAM, but will permanently store any data written to them. The good news is that there are some battery-backed DIMMs already available, but those have a fairly small capacity at this point (8GB, for example). There are much larger devices coming, 400GB was mentioned, but it is not known when they will be shipping. From Wilcox's talk, it is clear that the two different classes of devices will have different use cases, so they may be handled differently by the kernel.
It is good news that there are "exciting new memory products" in development, he said, but it may still be some time before we see them on the market. He is not sure that we will see them this year, for example. It turns out that development delays sometimes happen when companies are dealing with "new kinds of physics".
![Matthew Wilcox [Matthew Wilcox]](https://static.lwn.net/images/2015/lsf-wilcox-sm.jpg)
Christoph Hellwig jumped in early on in the talk to ask if Wilcox's employer, Intel, would be releasing its driver for persistent memory devices anytime soon. Wilcox was obviously unhappy with the situation, but said that the driver could not be released until the ACPI specification for how the device describes itself to the system is released. That is part of the ACPI 6 process, which will be released "when ACPI gets around to it". As soon as that happens, Intel will release its driver.
James Bottomley noted that there is a process within UEFI (which oversees ACPI) to release portions of specifications if there is general agreement by the participants to do so. He encouraged Intel to take advantage of that process.
Another attendee asked whether it was possible to write a driver today that would work with all of the prototype devices tested but wouldn't corrupt any of the other prototypes that had not been tested. Wilcox said no; at this point that isn't the case. "It is frustrating", he said.
Persistent memory and struct page
He then moved on to a topic he thought would be of interest to the memory-management folks in attendance. With a 4KB page size, and a struct page for each page, the 400GB device he mentioned would require 6GB just to track those pages in the kernel. That is probably too much space to "waste" for those devices. But if the kernel tracks the memory with page structures, it can be treated as normal memory. Otherwise, some layer, like a block device API, will be needed to access the device.
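A rough back-of-the-envelope check of that figure, assuming the usual 64-byte struct page on a 64-bit kernel, bears it out:

$$
\frac{400\ \mathrm{GB}}{4\ \mathrm{KB/page}} \approx 10^{8}\ \text{pages},
\qquad
10^{8}\ \text{pages} \times 64\ \mathrm{B/page} \approx 6.4\ \mathrm{GB}
$$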
Wilcox has been operating under the assumption that those kinds of devices won't use struct page. On the other hand, Boaz Harrosh (who was not present at the summit) has been pushing patches for other, smaller devices, and those patches do use struct page. That makes sense for that use case, Wilcox said, but it is not the kind of device he has been targeting.
Those larger devices have wear characteristics that are akin to those of NAND flash, but it isn't "5000 cycles and the bit is dead". The devices have wear lifetimes of 10^7 or 10^8 cycles. In terms of access times, some are even faster than DRAM, he said.
Ted Ts'o suggested that the different capacity devices might need to be treated differently. Dave Chinner agreed, saying that the battery-backed devices are effectively RAM, while the larger devices are storage, which could be handled as block devices.
Wilcox said he has some preliminary patches to replace calls to get_user_pages() for these devices with a new call, get_user_sg(), which gets a scatter/gather list, rather than pages. That way, there is no need to have all those page structures to handle these kinds of devices. Users can treat the device as a block device. They can put a filesystem on it and use mmap() for data access.
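To make that usage model a little more concrete, here is a minimal sketch of an application mapping a file on such a device and storing into it directly. The path and sizes are hypothetical, and the filesystem is assumed to be mounted with DAX-style direct access:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical file on a filesystem backed by a persistent-memory block device */
        int fd = open("/mnt/pmem/data", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, 4096) < 0) {
            perror("open/ftruncate");
            return 1;
        }

        /* Map the file; with DAX, loads and stores go straight to the device */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        strcpy(p, "hello, persistent world");

        /* Ask the kernel to make the data durable */
        msync(p, 4096, MS_SYNC);
        munmap(p, 4096);
        close(fd);
        return 0;
    }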
That led to a discussion about what to do to handle a truncate() on a file that has been mapped with mmap(). Wilcox thinks that Unix, thus Linux, has the wrong behavior in that scenario. If a program accesses memory that is no longer part of the mapped file due to the truncation, it gets a SIGSEGV. Instead, he thinks that the truncate() call should be made to wait until the memory is unmapped.
Making truncate() wait is trivial to implement, Peter Zijlstra said, but it certainly changes the current behavior. He suggested adding a flag to mmap() to request this mode of operation. That should reduce the surprise factor as it makes the behavior dependent on what is being mapped. Ts'o said that he didn't think the kernel could unconditionally block truncate operations for hours or days without breaking some applications.
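The scenario under discussion can be reproduced with a small test program along these lines; it is a sketch only (the file name is illustrative, most error checking is omitted, and the handler catches both SIGBUS and SIGSEGV to be safe):

    #include <fcntl.h>
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static sigjmp_buf env;

    static void handler(int sig)
    {
        siglongjmp(env, sig);
    }

    int main(void)
    {
        int fd = open("/tmp/trunc-demo", O_RDWR | O_CREAT | O_TRUNC, 0644);
        ftruncate(fd, 2 * 4096);                 /* two pages of file data */

        char *p = mmap(NULL, 2 * 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        p[4096] = 'x';                           /* touch the second page */

        struct sigaction sa = { .sa_handler = handler };
        sigaction(SIGBUS, &sa, NULL);
        sigaction(SIGSEGV, &sa, NULL);

        ftruncate(fd, 4096);                     /* truncate away the second page */

        int sig = sigsetjmp(env, 1);
        if (sig == 0)
            p[4096] = 'y';                       /* access beyond the new EOF */
        else
            printf("faulted with signal %d\n", sig);
        return 0;
    }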
Getting back to the question of the drivers, Ts'o asked what decisions needed to be made and by when. The battery-backed devices are out there now, so patches to support them should go in soon, one attendee said. Hellwig said that it would make sense to have Harrosh's driver and the Intel driver in the kernel. People could then choose the one that made sense for their device. In general, that was agreeable, but the driver for the battery-backed devices still needs some work before it will be ready to merge. Bottomley noted that means that the group has decided to have two drivers, "one that needs cleaning up and one we haven't seen".
New instructions
Wilcox turned to three new instructions that Intel has announced for its upcoming processors that can be used to better support persistent memory and other devices. The first is clflushopt, which adds guarantees to the cache-line flush (clflush) instruction. The main benefit is that it is faster than clflush. Cache-line writeback (clwb) is another, which writes the cache line back to memory, but still leaves it in the cache. The third is pcommit, which acts as a sort of barrier to ensure that any prior cache flushes or writebacks actually get to memory.
The effect of pcommit is global for all cores. The idea is to do all of the flushes, then pcommit; when it is done, all that data will have been written. On current processors, there is no way to be sure that everything has been stored. He said that pcommit support still needs to be added to DAX, the direct access block layer for persistent memory devices that he developed.
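The intended usage pattern, as described, is roughly "write back every dirtied cache line, fence, pcommit, then fence again." The sketch below shows what that might look like from user space; the _mm_clwb(), _mm_pcommit(), and _mm_sfence() compiler intrinsics and the 64-byte cache-line size are assumptions here, not something the talk specified:

    #include <stddef.h>
    #include <stdint.h>
    #include <immintrin.h>

    #define CACHE_LINE 64   /* assumed cache-line size */

    /* Make a range of persistent memory durable: write back each cache
     * line, fence, issue pcommit, then fence again so that later stores
     * are ordered after the commit. */
    static void pmem_persist(const void *addr, size_t len)
    {
        uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
        uintptr_t end = (uintptr_t)addr + len;

        for (; p < end; p += CACHE_LINE)
            _mm_clwb((void *)p);     /* write back, keep the line cached */

        _mm_sfence();                /* order the write-backs */
        _mm_pcommit();               /* push accepted stores to persistence */
        _mm_sfence();                /* order later stores after the commit */
    }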
Ts'o asked about other processors that don't have support for those kinds of instructions, but Wilcox didn't have much of an answer for that. He works for Intel, so other vendors will have to come up with their own solutions there.
There was also a question about adding a per-CPU commit, which Wilcox said was under internal discussion. But Bottomley thought that if there were more complicated options, that could just lead to more problems. Rik van Riel noted that the scheduler could move the process to a new CPU halfway through a transaction anyway, so the target CPU wouldn't necessarily be clear. In answer to another question, Wilcox assured everyone that the flush operations would not be slower than existing solutions for SATA, SAS, and others.
Error handling
His final topic was error handling. There is no status register that gives error indications when you access a persistent memory device, since it is treated like memory. An error causes a machine check, which typically results in a reboot. But if the problem persists, it could just result in another reboot when the device is accessed again, which will not work all that well.
To combat this, there will be a log of errors for the device that can be consulted at startup. It will record the block device address where problems occur and filesystems will need to be able to map that back to a file and offset, which is "not the usual direction for a filesystem". Chinner spoke up to say that XFS would have this feature "soon". Ts'o seemed to indicate ext4 would also be able to do it.
But "crashing is not a great error discovery technique", Ric Wheeler said; it is "moderately bad" for enterprise users to have to reboot their systems that way. But handling the problems when an mmap() is done for that bad region in the device is not easy either. Several suggestions were made (a signal from the mmap() call or when the page table entry is created, for example), but any of them mean that user space needs to be able to handle the errors.
In addition, Chris Mason said that users are going to expect to be able to mmap() a large file that has one bad page and still access all of the other pages from the file. That may not be reasonable, but is what they will expect. At that point, the discussion ran out of time without reaching any real conclusion on error handling.
[I would like to thank the Linux Foundation for travel support to Boston for the summit.]
Allowing small allocations to fail
As Michal Hocko noted at the beginning of his session at the 2015 Linux Storage, Filesystem, and Memory Management Summit, the news that the memory-management code will normally retry small allocations indefinitely rather than returning a failure status came as a surprise to many developers. Even so, this behavior is far from new; it was first added to the kernel in 2001. At that time, only order-0 (single-page) allocations were treated that way, but, as the years went by, that limit was raised repeatedly; in current kernels, anything that is order-3 (eight pages) or less will not normally be allowed to fail. The code to support this mode of operation has become more complex over time as well.
Relatively late in the game, the __GFP_NOFAIL flag was added to specifically annotate the places in the kernel where failure-proof allocations are needed, but the "too small to fail" behavior has never been removed from other allocation operations. After 14 years, Michal said, there will certainly be many places in the code that depend on these semantics. That is unfortunate, since the failure-proof mode is error-prone and unable to deal with real-world situations like infinite retry loops outside of the allocator, locking conflicts, and out-of-memory (OOM) situations. The result is occasional lockups as described in this article.
There have been various attempts to get around the problem, such as adding timeouts to the OOM killer (see this article), but Michal thinks such approaches are "not nice." The proper way to handle that kind of out-of-memory problem is to simply fail allocation requests when the necessary resources are not available. Most of the kernel already has code to check for and deal with such situations; beyond that, the memory-management code should not be attempting to dictate the failure strategy to the rest of the kernel.
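For illustration, this is the conventional pattern that most kernel call sites already follow, contrasted with an explicitly failure-proof allocation; it is a minimal sketch rather than code from any particular subsystem, and "struct widget" is a stand-in type:

    #include <linux/slab.h>
    #include <linux/errno.h>

    struct widget { int id; };   /* stand-in structure for the example */

    /* The common case: check for failure and push the error back up. */
    static int setup_widget(struct widget **out)
    {
        struct widget *w = kmalloc(sizeof(*w), GFP_KERNEL);

        if (!w)
            return -ENOMEM;   /* the caller is expected to cope */
        *out = w;
        return 0;
    }

    /* The rare site that truly cannot fail is supposed to say so explicitly,
     * rather than relying on the allocator's implicit retry-forever behavior. */
    static struct widget *setup_widget_nofail(void)
    {
        return kmalloc(sizeof(struct widget), GFP_KERNEL | __GFP_NOFAIL);
    }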
Changing the allocator's behavior is relatively easy; the harder question is how to make such a change without introducing all kinds of hard-to-debug problems. The current code has worked for 14 years, so there will be many paths in the kernel that rely on it. Changing its behavior will certainly expose bugs.
Michal posted a patch just before the summit demonstrating the approach to the problem that he is proposing. That patch adds a new sysctl knob that controls how many times the allocator should retry a failed attempt before returning a failure status; setting it to zero disables retries entirely, while a setting of -1 retains the current behavior. There is a three-stage plan for the use of this knob. In the first stage, the default setting would be for indefinite retries, leaving the kernel's behavior unchanged. Developers and other brave people, though, would be encouraged to set the value lower. The hope is to find and fix the worst of the resulting bugs in this stage.
In the second stage, an attempt would be made to get distributors to change the default value. In the third and final stage, the default would be changed in the upstream kernel itself. Even in this stage, where, in theory, the bugs have been found, the knob would remain in place so that especially conservative users could keep the old behavior.
Michal opened up the discussion by asking if the assembled developers thought this was the right approach. Rik van Riel said that most kernel code can handle allocation failure just fine, but a lot of those allocations happen in system calls. In such cases, the failures will be passed back to user space; that is likely to break applications that have never seen certain system calls fail in this way before.
Ted Ts'o added that the kernel would most likely be stuck in the first stage for a very long time. As soon as distributions start changing the allocator's behavior, their phones will start ringing off the hook. In the ext4 filesystem, he has always been nervous about passing out-of-memory failures back to user space because of the potential for application problems. If the system call interface does that instead it won't be his fault, he said, but things will still break.
Peter Zijlstra observed that ENOMEM is a valid return from a system call. Ted agreed, but said that, after all these years, applications will break anyway, and then users will be knocking at his door. He went on to say that in large data-center settings (Google, for example) where the same people control both kernel and user space it should be possible to find and fix the resulting bugs. But just fixing the bugs in open-source programs is going to be a long process. In the end, he said, such a change is going to have to provide a noticeable benefit to users — a much more robust kernel, say — or we will be torturing them for no reason.
Andrew Morton protested that the code we have now seems to work almost all of the time. Given that the reported issues are quite rare, he asked, what problem are we actually trying to solve? Andrea Arcangeli noted that he'd observed lockups and that the OOM killer's relatively unpredictable behavior does not help. He tried turning off the looping in the memory allocator and got errors out of the ext4 filesystem instead. It was a generally unpleasant situation.
Andrew suggested that making the OOM killer work better might be a better place to focus energy, but Dave Chinner disagreed, saying that it was an attempt to solve the wrong problem. Rather than fix the OOM killer, it would be better to not use it at all. We should, he said, take a step back and ask how we got into the OOM situation in the first place. The problem is that the system has been overcommitted. Michal said that overcommitting of resources was just the reality of modern systems, but Dave insisted that we need to look more closely at how we manage our resources.
Andrew returned to the question of improving the OOM killer. Perhaps, he said, it could be made to understand lock dependencies and avoid potential deadlock situations. Rik suggested that was easier said than done, though; for example, an OOM-killed process may need to acquire new locks in order to exit. There will be no way for the OOM killer to know what those locks might be prior to choosing a victim to kill. Andrew acknowledged the difficulties but insisted that not enough time has gone into making the OOM killer work better. Ted said that OOM killer improvements were needed regardless of any other changes; since the allocator's default behavior cannot be changed for years, we will be stuck with the OOM killer for some time.
Michal was nervous about the prospect of messing with the OOM killer. We don't, he said, want to go back to the bad old days when its behavior was far more random than it is now. Dave said, though, that it is not possible to have a truly deterministic OOM killer if the allocation layers above it are not deterministic. It will behave differently every time it is tested. Until things are solidified in the allocator, the OOM killer is, he said, not the place to put effort.
The session wound down with Michal saying that starting to test kernels that fail small allocations will be helpful even if the distributors do not change the default for a long time. Dave said that he would turn off looping in the xfstests suite by default. There was some talk about the best values to use, but it seems it matters little as long as the indefinite looping is turned off. Expect to see a number of interesting bugs once this testing begins.
[Your editor would like to thank LWN subscribers for funding his travel to LSFMM 2015.]
Improving huge page handling
The "huge page" feature found in most contemporary processors enables access to memory with less stress on the translation lookaside buffer (TLB) and, thus, better performance. Linux has supported the use of huge pages for some years through both the hugetlbfs and transparent huge pages features, but, as was seen in the two sessions held during the memory-management track at LSFMM 2015, there is still considerable room for improvement in how this support is implemented.
Kirill Shutemov started off by describing his proposed changes to how reference counting for transparent huge pages is handled. This patch set was described in detail in this article last November and has not changed significantly since. The key part of the patch is that it allows a huge page to be simultaneously mapped in the PMD (huge page) and PTE (regular page) modes. It is, as he acknowledged, a large patch set, and there are still some bugs, so it is not entirely surprising that this work has not been merged yet.
One remaining question has to do with partial unmapping of huge pages. When a process unmaps a portion of a huge page, the expected behavior is to split that page up and return the individual pages corresponding to the freed region back to the system. It is also possible, though, to split up the mapping while maintaining the underlying memory as a huge page. That keeps the huge page together and allows it to be quickly remapped if the process decides to do so. But that also means that no memory will actually be freed, so it is necessary to add the huge page to a special list where it can be truly split up should the system experience memory pressure.
Deferred splitting also helps the system to avoid another problem: currently there is a lot of useless splitting of huge pages when a process exits. There was some talk of trying to change munmap() behavior at exit time, but it is not as easy as it seems, especially since the exiting process may not hold the only reference to any given huge page.
Hugh Dickins, the co-leader of the session, pointed out that there is one complication with regard to Kirill's patch set: he is not the only one who is working with simultaneous PMD and PTE mappings of huge pages. Hugh recently posted a patch set of his own adding transparent huge page support to the tmpfs filesystem. This work contains a number of the elements needed for full support for huge pages in the page cache (which is also an eventual goal of Kirill's patches). But Hugh's approach is rather different, leading to some concern in the user community; in the end, only one of these patch sets is likely to be merged.
Hugh's first goal is to provide a more flexible alternative for users of the hugetlbfs filesystem. But his patches diverge from the current transparent huge page implementation (and Kirill's patches) in a significant way: they completely avoid the use of "compound pages," the mechanism used to bind individual pages into a huge page. Compound pages, he said, were a mistake to use with transparent huge pages; they are too inflexible for that use case. Peter Zijlstra suggested that, if this is really the case, Hugh should look at moving transparent huge pages away from compound pages; Hugh expressed interest but noted that available time was in short supply.
Andrea Arcangeli (the original author of the transparent huge pages feature) asked Hugh to explain the problems with compound pages. Hugh responded that the management of page flags is getting increasingly complicated when huge pages are mapped in the PTE mode. So he decided to do everything in tmpfs with ordinary 4KB pages. Kirill noted that this approach makes tmpfs more complex, but Hugh thought that was an appropriate place for the complexity to be.
When it comes to bringing huge page support to the page cache, though, it's not clear where the complexity should be. Hugh dryly noted that filesystem developers already have enough trouble with the memory-management subsystem without having to deal with more complex interfaces for huge page support. He was seemingly under the impression that there is not a lot of demand for this support from the filesystem side. Btrfs developer Chris Mason said, though, that he would love to find ways to reduce overhead on huge-memory systems, and that huge pages would help. Matthew Wilcox added that there are users even asking for filesystem support with extra-huge (1GB) pages.
Rik van Riel jumped in to ask if there were any specific questions that needed to be answered in this session. Hugh returned to the question of whether filesystems need huge page support and, if so, what form it should take, but not much discussion of that point ensued. There was some talk of Hugh's tmpfs work; he noted that one of the hardest parts was support for the mlock() system call. There is a lot of tricky locking involved; he was proud to have gotten it working.
In a brief return to huge page support in the page cache, it was noted that Kirill's reference-counting work can simplify things considerably; Andrea said it was attractive in many ways.
There was some talk of what to do when an application calls madvise() on a portion of a huge page with the MADV_DONTNEED command. It would be nice to recover the memory, but that involves an expensive split of the page. Failure to do so can create problems; they have been noted in particular with the jemalloc implementation of malloc(). See this page for a description of these issues.
Even if a page is split when madvise(MADV_DONTNEED) is called on a portion of it, there is a concern that the kernel might come around and "collapse" it back into a huge page. But Andrea said this should not be a problem; the kernel will only collapse memory into huge pages if the memory around those pages is in use. But, in any case, he said, user space should be taught to use 2MB pages whenever possible. Trying to optimize for 4KB pages on current systems is just not worth it and can, as in the jemalloc case, create problems of its own.
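As a concrete, if contrived, sketch of the pattern being discussed (the sizes and offsets are illustrative only), a user-space allocator might map a huge-page-sized region, request transparent huge pages for it, and later hand a single 4KB chunk back to the kernel, which is what forces the split:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define HPAGE_SIZE (2UL * 1024 * 1024)   /* 2MB huge page on x86 */

    int main(void)
    {
        char *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Ask the kernel to back this region with transparent huge pages. */
        madvise(p, HPAGE_SIZE, MADV_HUGEPAGE);

        /* Touch the region so that it actually gets populated. */
        memset(p, 0, HPAGE_SIZE);

        /* Later: give one 4KB page in the middle back to the kernel.  If the
         * region is backed by a huge page, this is where the split (or the
         * deferred handling discussed above) comes into play. */
        madvise(p + 64 * 4096, 4096, MADV_DONTNEED);

        return 0;
    }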
The developers closed out this session by agreeing to look more closely at both approaches. There is a lot of support for the principles behind Kirill's work. Hugh complained that he hasn't gotten any feedback on his patch set yet. While the patches are under review, Kirill will look into extending his work to the tmpfs filesystem, while Hugh will push toward support for anonymous transparent huge pages.
Compaction
The topic of huge pages returned on the second day, however, when Vlastimil Babka ran a session focused primarily on the costs of compaction. The memory compaction code moves pages around to create large, physically contiguous regions of free memory. These regions can be used to support large allocations in general, but they are especially useful for the creation of huge pages.
The problem comes in when a process incurs a page fault, and the kernel attempts to resolve it by allocating a huge page. That task can involve running compaction which, since it takes a while, can create significant latencies for the faulting process. The cost can, in fact, outweigh the performance benefits of using huge pages in the first place. There are ways of mitigating this cost, but, Vlastimil wondered, might it be better to avoid allocating huge pages in response to faults in the first place? After all, it is not really known whether the process needs the entire huge page or not; it's possible that much of that memory might be wasted. It seems that this happens, once again, with the jemalloc library.
Since it is not possible to predict the benefit of supplying huge pages at fault time, Vlastimil said, it might be better to do a lot less of that. Instead, transparent huge pages should mostly be created in the khugepaged daemon, which can look at memory utilization and collapse pages in the background. Doing so requires redesigning khugepaged, which was mainly meant to be a last resort filling in huge pages when other methods fail. It scans slowly, and can't really tell if a process will benefit from huge pages; in particular, it does not know if the process will spend a lot of time running. It could be that the process mostly lurks waiting for outside events, or it may be about to exit.
His approach is to improve khugepaged by moving the scanning work that looks for huge page opportunities into process context. At certain times, such as on return from a system call, each process would scan a bit of its memory and, perhaps, collapse some pages into huge pages. It would tune itself automatically based partially on success rate, but also simply based on the fact that a process that runs more often will do more scanning. Since there is no daemon involved, there are no extra wakeups; if a system is wholly idle, there will be no page scanning done.
Andrea protested, though, that collapsing pages in khugepaged is far more expensive than allocating huge pages at fault time. To collapse a page, the kernel must migrate (copy) all of the individual small pages over to the new huge page that will contain them; that takes a while. If the huge page is allocated at page fault time, this work is not needed; the entire huge page can be faulted in at once. There might be a place for process-context scanning to create huge pages before they are needed, but it would be better, he said, to avoid collapsing pages whenever possible.
Vlastimil suggested allocating huge pages at fault time but only mapping the specific 4KB page that faulted; the kernel could then observe utilization and collapse the page in-place if warranted. But Andrea said that would needlessly deprive processes of the performance benefits that come from the use of huge pages. If we're going to support this feature in the kernel, we should use it fully.
Andi Kleen said that running memory compaction in process context is a bad idea; it takes away opportunities for parallelism. Compaction scanning should be done in a daemon process so that it can run on a separate core; to do otherwise would be to create excessive latency for the affected processes. Andrea, too, said that serializing scanning with execution was the wrong approach; he suggested putting that work into a workqueue instead. But Mel Gorman said he would rather see the work done in process context so that it can be tied to the process's activity.
At about this point the conversation wound down without having come to any firm conclusions. In the end, this is the sort of issue that is resolved over time with working code.
User-space page fault handling
Andrea Arcangeli's userfaultfd() patch set has been in development for a couple of years now; it has the look of one of those large memory-management changes that takes forever to find its way into the mainline. The good news in this case was announced at the beginning of this session in the memory-management track of the 2015 Linux Storage, Filesystem, and Memory Management Summit: there is now the beginning of an agreement with Linus that the patches are in reasonable shape. So we may see this code merged relatively soon.
The userfaultfd() patch set, in short, allows for the handling of page faults in user space. This seemingly crazy feature was originally designed for the migration of virtual machines running under KVM. The running guest can move to a new host while leaving its memory behind, speeding the migration. When that guest starts faulting in the missing pages, the user-space mechanism can pull them across the net and store them in the guest's address space. The result is quick migration without the need to put any sort of page-migration protocol into the kernel.
Andrea was asked whether the kernel, rather than implementing the file-descriptor-based notification mechanism, could just use SIGBUS signals to indicate an access to a missing page. That will not work in this case, though. It would require massively increasing the number of virtual memory areas (VMAs) maintained in the kernel for the process, could cause system calls to fail, and doesn't handle the case of in-kernel page faults resulting from get_user_pages() calls. What's really needed is for a page fault to simply block the faulting process while a separate user-space process (the "monitor") is notified to deal with the issue.
Pavel Emelyanov stood up to talk about his use case for this feature, which is the live migration of containers using the checkpoint-restore in user space (CRIU) mechanism. While the KVM-based use case involves having the monitor running as a separate thread in the same process, the CRIU case requires that the monitor be running in a different process entirely. This can be managed by sending the file descriptor obtained from userfaultfd() over a socket to the monitor process.
There are, Pavel said, a few issues that come up when userfaultfd() is used in this mode. The user-space fault handling doesn't follow a fork() (it remains attached to the parent process only), so faults in the child process will just be resolved with zero-filled pages. If the target process moves a VMA in its virtual address space with mremap(), the monitor will see the new virtual addresses and be confused by them. And, after a fork, existing memory goes into the copy-on-write mode, making it impossible to populate pages in both processes. The conversation did not really get into possible solutions for these problems, though.
Andrea talked a bit about the userfaultfd() API, which has evolved in the past months. There is now a set of ioctl() calls for performing the requisite operations. The UFFDIO_REGISTER call is used to tell the kernel about a range of virtual addresses for which faults will be handled in user space. Currently the system only deals with page-not-present faults. There are plans, though, to deal with write-protect faults as well. That would enable the tracking of dirtied pages which, in turn, would allow live snapshotting of processes or the active migration of pages back to a "memory node" elsewhere on the network.
With regard to the potential live-snapshotting feature, most of the needed mechanism is already there. There is one little problem in that, should the target modify a page that is currently resident on the swap device, the resulting swap-in fault will make the page writable. So userfaultfd() will miss the write operation and the page will not be copied. Some changes to the swap code will be needed to add a write-protect bit to swap entries before this feature will work properly.
Earlier versions of the patch introduced a remap_anon_pages() system call that would be used to slot new pages into the target process's address space. In the current version, that operation has been turned into another ioctl() operation. Actually, there is more than one; there are now options to either copy a page into the target process or to remap the page directly. Zero-copy operation has a certain naive appeal, but it turns out that the associated translation lookaside buffer (TLB) flush is more expensive than simply copying the data. So the remap option is of limited use and unlikely to make it upstream.
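To make the flow a bit more concrete, here is a heavily abbreviated sketch of what a monitor might do with the ioctl()-based interface. The API was still being finalized at the time, so the exact structure and constant names used below should be treated as assumptions, and all error handling is omitted:

    #include <fcntl.h>
    #include <stddef.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/userfaultfd.h>

    /* Register a region for user-space fault handling and resolve one
     * page-not-present fault by copying a page of data into place. */
    static void handle_one_fault(void *region, size_t len, void *page_of_data)
    {
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);

        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);

        struct uffdio_register reg = {
            .range = { .start = (unsigned long)region, .len = len },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        /* The monitor blocks here until some thread faults in the region. */
        struct uffd_msg msg;
        read(uffd, &msg, sizeof(msg));

        /* Resolve the fault by copying a page into the target's address
         * space; the faulting thread is woken once the copy is done. */
        struct uffdio_copy copy = {
            .dst = msg.arg.pagefault.address & ~0xfffUL,
            .src = (unsigned long)page_of_data,
            .len = 4096,
        };
        ioctl(uffd, UFFDIO_COPY, &copy);

        close(uffd);
    }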
Andrew Lutomirski worried that this feature was adding "weird semantics" to memory management. Might it be better, he said, to set up userfaultfd() as a sort of device that could then be mapped into memory with mmap()? That would isolate the special-case code and not change how "normal memory" behaves. The problem is that doing things this way would cause the affected memory range to lose access to many other useful memory-management features, including swapping, transparent huge pages, and more. It would, Pavel said, put "weird VMAs" into a process that really just "wants to live its own life" after migration.
As the discussion headed toward a close, Andrea suggested that userfaultfd() could perhaps be used to implement the long-requested "volatile ranges" feature. First, though, there is a need to finalize the API for this feature and get it merged; it is currently blocking the addition of the post-copy migration feature to KVM.
Fixing the contiguous memory allocator
Normally, kernel code goes far out of its way to avoid the need to allocate large, physically contiguous regions of memory, for a simple reason: the memory fragmentation that results as the system runs can make such regions hard to find. But some hardware requires these regions to operate properly; low-end camera devices are a common example. The kernel's contiguous memory allocator (CMA) exists to meet this need, but, as two sessions dedicated to CMA during the 2015 Linux Storage, Filesystem, and Memory Management Summit showed, there are a number of problems still to be worked out.
CMA works by reserving a zone of memory for large allocations. But the device needing large buffers is probably not active at all times; keeping that memory idle when the device does not need it would be wasteful. So the memory-management code will allow other parts of the kernel to allocate memory from the CMA zone, but only if those allocations are marked as being movable. That allows the kernel to move things out of the way should the need for a large allocation arise.
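From a device driver's point of view, none of this machinery is visible; the driver simply asks for a large, DMA-capable buffer and, on systems where CMA is configured, that request is typically satisfied from the CMA area. A minimal sketch of that driver-side pattern follows; the buffer size and function names are placeholders:

    #include <linux/dma-mapping.h>

    #define FRAME_BUF_SIZE (16 * 1024 * 1024)   /* e.g. a 16MB camera frame buffer */

    /* Allocate a large, physically contiguous, DMA-capable buffer.  On
     * systems with CMA enabled, an allocation this size will normally be
     * carved out of the CMA area once any movable pages have been
     * migrated out of the way. */
    static void *camera_alloc_frame(struct device *dev, dma_addr_t *dma)
    {
        return dma_alloc_coherent(dev, FRAME_BUF_SIZE, dma, GFP_KERNEL);
    }

    static void camera_free_frame(struct device *dev, void *buf, dma_addr_t dma)
    {
        dma_free_coherent(dev, FRAME_BUF_SIZE, buf, dma);
    }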
Laura Abbott started off the session by noting that there are a number of problems with CMA, relating to both the reliability of large allocations and the performance of the system as a whole. There are a couple of proposals out there to fix it — guaranteed CMA by SeongJae Park and ZONE_CMA from Joonsoo Kim — but no consensus on how to proceed. Joonsoo helped to lead the session, as did Gioh Kim.
Peter Zijlstra asked for some details on what the specific problems are. A big one appears to be the presence of pinned pages in the CMA region. All it takes is one unmovable page to prevent the allocation of a large buffer, which is why pinned pages are not supposed to exist in the CMA area. It turns out that pages are sometimes allocated as movable, but then get pinned afterward. Many of these pins are relatively short-lived, but sometimes they can stay around for quite a while. Even relatively short-lived pins can be a problem, though; delaying the startup of a device like a camera can appear as an outright failure to the user.
One particular offender, according to Gioh, appears to be the ext4 filesystem which, among other things, is putting superblocks (which are pinned for as long as the associated filesystem is mounted) in movable memory. Other code is doing similar things, though. The solution in these cases is relatively straightforward: find the erroneous code and fix it. The complication here, according to Hugh Dickins, is that a filesystem may not know that a page will need to be pinned at the time it is allocated.
Mel Gorman suggested that, whenever a page changes state in a way that could block a CMA allocation, it should be migrated immediately. Even something as transient as pinning a dirty page for writeback could result in that page being shifted out of the CMA area. It would be relatively simple to put hooks into the memory-management code to do the necessary migrations. The various implementations of get_user_pages() would be one example; the page fault handler when a page is first dirtied would be another. A warning could be added when get_page() is called to pin a page in the CMA area to call attention to other problematic uses. This approach, it was thought, could help to avoid the need for more complex solutions within CMA itself.
Of course, that sort of change could lead to lots of warning noise for cases when pages are pinned for extremely short periods of time. Peter suggested adding a variant of get_page() to annotate those cases. Dave Hansen suggested, instead, that put_page() could be instrumented to look at how long the page was pinned and issue warnings for excessive cases.
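The pin-and-release pattern under discussion typically looks something like the following on the kernel side; this is a sketch using get_user_pages_fast() with the calling convention of the time, with the actual I/O step elided:

    #include <linux/mm.h>
    #include <linux/errno.h>

    /* Pin a user buffer for I/O.  The pages were allocated as movable (and
     * so may sit in the CMA area), but for as long as the references taken
     * here are held they cannot be migrated out of the way of a large CMA
     * allocation. */
    static int pin_user_buffer(unsigned long uaddr, int nr_pages,
                               struct page **pages)
    {
        int pinned = get_user_pages_fast(uaddr, nr_pages, 1 /* write */, pages);

        if (pinned <= 0)
            return -EFAULT;

        /* ... set up and perform DMA to or from the pages ... */

        while (pinned--)
            put_page(pages[pinned]);   /* drop the pins once the I/O is done */
        return 0;
    }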
The second class of problems has to do with insufficient utilization of the CMA area when the large buffers are not needed. Mel initially answered that CMA was simply meant to work that way and that it would not be possible to relax the constraints on the use of the CMA area without breaking it. It eventually became clear that the situation is a bit more subtle than that, but that had to wait until the second session on the following day.
It took a while to get to the heart of the problem on the second day, but Joonsoo finally described it as something like the following. The memory-management code tries to avoid allocations from the CMA area entirely whenever possible. As soon as the non-CMA part of memory starts to fill, though, it becomes necessary to allocate movable pages from the CMA area. But, at that point, memory looks tight, so kswapd starts running and reclaiming memory. The newly reclaimed memory, probably being outside of the CMA area, will be preferentially used for new allocations. The end result is that memory in the CMA area goes mostly unused, even when the system is under memory pressure.
Gioh talked about his use case, in which Linux is embedded in televisions. There is a limited amount of memory in a TV; some of it must be reserved for the processing of 3D or high-resolution streams. When that is not being done, though, it is important to be able to utilize that memory for other purposes. But the kernel is not making much use of that memory when it is available; this is just the problem described by Joonsoo.
Joonsoo's solution involves adding a new zone (ZONE_CMA) to the memory-management subsystem. Moving the CMA area into a separate zone makes it relatively easy to adjust the policies for allocation from that area without, crucially, adding more hooks to the allocator's fast paths. But, as Mel said, there are disadvantages to this approach. Adding a zone will change how page aging is done, making it slower and more cache-intensive since there will be more lists to search. These costs will be paid only on systems where CMA is enabled so, he said, it is ultimately a CMA issue, but people should be aware that those costs will exist. That is the reason that a separate zone was not used for CMA from the beginning.
Dave suggested combining ZONE_CMA with ZONE_MOVABLE, which is also meant for allocations that can be relocated on demand. The problem there, according to Joonsoo, is that memory in ZONE_MOVABLE can be taken offline, while memory for CMA should not be unpluggable in that way. Putting CMA memory into its own zone also makes it easier to control allocation policies and to create statistics on the utilization of CMA memory.
The session ended with Mel noting that there did not appear to be any formal objections to the ZONE_CMA plan. But, he warned, the CMA developers, by going down that path, will be trading one set of problems for another. Since the tradeoff only affects CMA users, it will be up to them to decide whether it is worthwhile.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Janitorial
Memory management
Security-related
Page editor: Jonathan Corbet
Distributions
Ten years of Kubuntu
Kubuntu will turn ten years old this April. Kubuntu is a Linux distribution that has tried to remain true to the community that makes and uses it while working with the commercial sponsors and users who give it direction and help it succeed. Over the years, its technical, social, and commercial successes have been as fun as the challenges.
Fresh out of university in Scotland a decade ago, I'd learned about software development from leading a KDE project: the Umbrello UML Modeller. Now I've had the pleasure of being involved in the Kubuntu community for the lifespan of the project. Ubuntu celebrated its tenth anniversary last year. The Kubuntu story, creating a flavor of Ubuntu with KDE software, began six months later.
A new distribution
I first heard of the "Super Secret Debian Startup" (which became Canonical) while organizing a KDE stall at one of the commercial Linux exhibitions in London. A charismatic former spaceman named Mark Shuttleworth was hanging around with the Debian team. At the time KDE was the more popular of the two rival desktops, but to some of us it felt like it was on the descent, because GNOME had begun to focus on usability through simplicity and was getting praise for its accessibility.
News of a new Linux distribution founded on the solid technical foundation of Debian, but usable to non-enthusiasts, was exciting. However, I worried that the choices made by the distribution would mean that the community I had grown to love and that had let me learn how to program and collaborate would be left out. I wrote a blog post to alert the KDE community to this forthcoming change in the Linux distribution market, but there was a muted response. "Not another Debian based distribution. We've had UserLinux and umpteen others" was the first comment.
So I set about updating KDE packages, held back by one of Debian's long freezes. Ubuntu had made an impression by making a few configuration tweaks to Debian and GNOME and I tried to do the same for KDE by removing duplicated applications and excessive toolbars. When the first Ubuntu membership meeting was held, I was the first to be grilled about why I wanted to be part of the developer team and to have upload rights. Fortunately it's a welcoming process and much of it has been the inspiration for similar processes in community distributions created since.
The first Kubuntu
I launched Kubuntu 5.04 on April 8, 2005. KDE founder Matthias Ettrich, who was growing disillusioned with the lack of discipline in his creation, came onto IRC to congratulate me for putting together a nice setup. At the talk I gave at LugRadio Live, I was happy to be able to hand out as many Kubuntu CDs as I could fit in my car.
![[KDE Konqi in Ubuntu pose]](https://photos1.blogger.com/blogger/6101/1291/320/konqi.jpg)
After this success, I received a contract from Canonical to keep working on Kubuntu and to build a community around it. From the start, I wanted the project to be aligned with KDE in terms of developers and users as well as to keep the spirit of KDE's branding. It seemed the natural way to create a community around a distribution that used software from a project like KDE was to ensure that it worked well with that distribution. So I worked to get the configuration changes we'd made to the default desktop adopted into KDE. The reward for that work was being able to rely on KDE developers when the Kubuntu project got stuck on some technical details.
During 2005 we had the first Ubuntu Developer Summits (UDS). One perk of free software development is traveling the world to meet interesting people in interesting places. With Ubuntu, you got to do that in a private jet and stay at fancy hotels. These first summits were held in a single room with talks around tables. They differed from the open-source development conferences I'd been to before, which were based around presentations and hacking. UDS was based around writing project specifications and getting them proofread and approved. It was a deliberate attempt to bring some focus to the open development method; it worked well once everyone understood it and the bureaucracy was streamlined. It's another innovative community process that has been adopted by other projects such as Linaro and Qt.
In 2006, longtime KDE supporter SuSE, bought by Novell along with Ximian, was having internal struggles between supporters of the rival desktop camps. A couple of staff members were laid off and I took the opportunity to invite Ken Wimer to a design sprint in London for the first Long Term Support (LTS) release. There, we worked on the next major step in changing the way open-source software was produced and delivered. At the time, open-source software was typically designed by the programmers who wrote it, with some usability groups trying to tidy up after. Here we moved to doing what Apple is successful in doing, by designing the software and user interface first. We replaced the clunky installers common on Linux distributions with one that let you preview the desktop before you install.
Support and adoption
Showing off the new installer at LinuxTag 2006 in Germany, Shuttleworth wore a KDE T-shirt and announced commercial support for Kubuntu. The project succeeded in changing Canonical's single desktop approach into a dual-desktop approach; now Kubuntu just had to get the world to follow.
![Mark Shuttleworth [Shuttleworth KDE T-shirt]](https://static.lwn.net/images/2015/shuttleworth-kde-sm.jpg)
The world started to follow in 2007, when I met the people who rolled out Kubuntu in schools in the country of Georgia. That summer I was invited to Tenerife, where the government of the Canary Islands had also used it in its schools and the university had started basing its teaching around Kubuntu. Later, I got invited to Kano in northern Nigeria to talk to government ministers and run a conference about the advantages of open source. I like to say that success in free software really does let you feel like an international freedom fighter.
The KDE 4 release event was held at Google's offices in California in 2008. There we hoped to show off to the world how KDE could not just match the proprietary competition but surpass it. Kubuntu made a dual release to allow it to cope with the features still not ported to the new Plasma 4 desktop; one with KDE 3 and one with Plasma 4. The release of KDE 4 has been remembered as a failed launch, one which lost KDE a lot of users, but the project was ambitious at the time and that release laid the groundwork for a desktop that could compete with the innovation that was happening elsewhere.
Differing directions
Canonical was also searching for answers to the problem of creating a product for people who didn't care about operating systems. Its response was to move further into doing what open source has always been poor at: design-led software. Canonical hired a team of designers, people with expertise in usability, art, and the psychology of devices, to work from an office in London. I got a phone call from Shuttleworth asking if KDE wanted to be part of the new development. Naturally I said yes; I didn't want Kubuntu to be left behind.
A couple of KDE developers were soon hired to work on the designs coming out of Canonical's London office. One of the first designs was to change the notifications system to be ephemeral and not to include actions, a design which stood in contrast to what KDE and most desktops did at the time. At the UDS that followed, a heated debate erupted in which long-term Kubuntu contributors Scott Kitterman and Celeste Lyn Paul told Shuttleworth that replacing functionality of the KDE desktop would destroy much of the credit for cooperation that had been built up over the years between KDE and Kubuntu.
An agreement was worked out where the Canonical-developed features would be committed to upstream KDE before being accepted into Kubuntu. This preserved the connection between Kubuntu and KDE while allowing the project to benefit from the work Canonical was funding. This joint relationship ended when Canonical decided to impose copyright assignment requirements and unilaterally moved development away from the KDE infrastructure.
![Kubuntu and KDE developers [Barcelona UDS]](https://static.lwn.net/images/2015/kubuntu-kde-devs-sm.jpg)
The Kubuntu team had grown strong and confident with successful releases over the years. Some of the team countered the move away from community-made software to that of Canonical-made software by launching Project Timelord, which had a manifesto to work more closely with upstream developers. Translations, which had been forked from the beginning of Ubuntu into Launchpad, were instead taken directly from upstream, bug reports moved upstream except for those specific to packages, and we promised to drop any patches that weren't approved by upstream developers.
Tensions between community and company were finally sidestepped for good in Ubuntu's desktop flavor in 2011 when Canonical dropped GNOME for its own design: Unity. At the time, I said that I agreed with Canonical's move away from community-developed desktop software; nobody has made money from that while we all carry Android phones in our pockets. But it was the start of Canonical also stepping back from supporting Kubuntu.
Canonical steps back
In 2012, a car crash on a tropical island left me with head trauma to recover from, and unfortunately it turned out to be a difficult year to recover in. I received a phone call from my manager at Canonical telling me I wouldn't be able to work on Kubuntu any more and that commercial support would be stopped. As an example of the difficulties Canonical has had working with community, the communication of this change didn't mention what the change actually was. I had to step in and explain it.
During this time, the Kubuntu community had a period of soul searching. Did the world need a KDE-based distribution as part of a project that was increasingly unwilling to work with community-made desktop software? Then it was announced that Ubuntu would move away from X and Wayland, in favor of its own creation, Mir. Could Kubuntu live as part of a project where much of the software it depended on now was not supported by the main developers? At the same time the week-long trips for UDS were replaced with online video meetings. Could Kubuntu work effectively without the developers being able to meet together face-to-face?
During the soul searching, people started telling project members how much they depended on Kubuntu and offering their help. A company called Emerge Open had been founded by Niall McCarthy with ideas from Samba's Dan Shearer to try to put open-source projects together with companies that can make some revenue. It's a non-profit company that is so open it even publishes the salaries of its directors. McCarthy set up a deal with Canonical to allow him to offer a commercial support service for Kubuntu where the proceeds would come back to Kubuntu. Blue Systems, a company run by Clemens Tönnies Jr., a German developer with deep pockets, that supports KDE financially in multiple ways, hired me to keep working on the distribution. And Kubuntu was able to replace meeting at UDS with meetings at KDE's Akademy conference and in the city of Munich, which uses Kubuntu on its computers.
At KDE's Akademy conference last year the project worked out the details for a closer collaboration with Debian. Kubuntu and Debian now share packaging repositories to replace the inefficient manual merging method Ubuntu has always used. Harald Sitter has also set up Kubuntu CI, a continuous integration of KDE sources with Kubuntu packaging that produces packages ready for testing as soon as changes are made. In addition, the new weekly ISO images have been vital to the Plasma team for testing Plasma 5. The sprint in Munich last autumn let us see how the city is saving millions of euros by switching the council's computers to free software. The sprint also let us collaborate with projects like Kolab and LibreOffice to ensure a complete desktop experience.
Ten years
As we approach the tenth anniversary, Kubuntu can show many of the successes as well as many of the challenges of taking free software into the hands of users. It is now only one of many desktop flavors of Ubuntu and there are no signs of taking over the world yet, but we continue to have fun producing ever-improving software that is innovative and helps people who in turn help the project. Kubuntu gets used throughout the world, including in the world's largest desktop deployment in Brazil. The whole Kubuntu team is deeply integrated with KDE; I'm now the release manager for Plasma as well as Kubuntu.
The previews of Kubuntu 15.04, which will be the first distribution to change to Plasma 5, are receiving great reviews. The support of the project, including all the infrastructure from Canonical and the rest of the Ubuntu community as well as the international travel, even if not so often by private jet these days, shows the importance many people place on the project's continued existence. We always welcome new helpers on our IRC channel to join the team. Please do say "hi" and help change the world.
Brief items
Distribution quote of the week
Fedora 22 Alpha released
The Fedora Project has announced the release of Fedora 22 Alpha. "The Alpha release contains all the exciting features of Fedora 22's editions in a form that anyone can help test. This testing, guided by the Fedora QA team, helps us target and identify bugs. When these bugs are fixed, we make a Beta release available. A Beta release is code-complete and bears a very strong resemblance to the third and final release. The final release of Fedora 22 is expected in May."
Distribution News
Debian GNU/Linux
Three Debian technical committee appointments
Debian project leader Lucas Nussbaum has confirmed the appointment of three new members to the Debian technical committee. The new members are Didier Raboud, Tollef Fog Heen, and Sam Hartman; they will be replacing Ian Jackson, Russ Allbery, and Colin Watson.
Status on the Jessie release
The Debian release team has a status report on Jessie (Debian 8.0). An April release is possible "*however*, it implies that we all roll up our sleeves and squash those remaining bugs."
Debian Bug Squashing Party in Salzburg/Austria
There will be a Debian BSP in Salzburg, Austria, April 17-19.
Ubuntu family
Vivid will switch to booting with systemd
Ubuntu's next release, Vivid Vervet (15.04), will use systemd by default. "Technically, this will flip around the preferred dependency of "init" to "systemd-sysv | upstart", which will affect new installs, but not upgrades. Upgrades will be switched by adding "systemd-sysv" to ubuntu-standard's dependencies."
Newsletters and articles of interest
Distribution newsletters
- Debian Misc Developer News (#38) (March 10)
- DistroWatch Weekly, Issue 600 (March 9)
- 5 things in Fedora this week (March 6)
- Gentoo Monthly Newsletter (February)
- Tails report (January and February)
- Ubuntu Weekly Newsletter, Issue 407 (March 8)
Page editor: Rebecca Sobol
Development
Building HTTP/2 services with gRPC
Google recently released a new remote procedure call (RPC) framework that is designed to supplant traditional representational state transfer (REST) in the development of web applications. Called gRPC (which the FAQ defines as "gRPC Remote Procedure Calls"), the new framework takes advantage of several features in the recently approved HTTP/2 standard that the project claims will result in better performance and greater flexibility than REST APIs. Implementations are available—for both client- and server-side code—in a variety of programming languages.
The basic idea behind gRPC is quite similar to that used by the RESTful services that make up so many web applications today. The methods of the server application are accessible to clients through a well-defined set of HTTP requests. The server answers each of these calls with an HTTP response (or, if necessary, an error code). But gRPC dispenses with HTTP 1.1, relying strictly on HTTP/2—and thus, the project claims, gaining considerable performance.
gRPC was announced in a February 26 blog post. The post described gRPC as enabling "highly performant, scalable APIs and microservices" and said that Google is starting to expose gRPC endpoints to its own services. It also said that other companies have been involved in the creation of gRPC, naming mobile-payment processor Square as one example. Square posted an account of its own that explains some of the background. Both of the posts highlight a few key features that are enabled by building on top of HTTP/2—namely, bidirectional streaming, multiplexed connections, flow control, and HTTP header compression.
Those improvements, of course, would be available to any RPC framework using HTTP/2. As for gRPC itself, it is built on top of Google's protocol buffers, which was updated to version 3.0 (a.k.a. "proto3") in conjunction with the gRPC release. Proto3 is largely a stripped-down version of proto2: several features have been removed, including required fields and user-specified default values, and the extension mechanism has been replaced with a simple Any standard type that can hold an arbitrary serialized message. Proto3 also adds standard types for times and dates and for dynamic data, plus bindings for Ruby and JavaNano (an Android-centric Java library optimized for low-resource systems).
gRPC uses protocol buffers to serialize and deserialize its over-the-wire messages. Employing that technique leverages HTTP/2's binary wire format, allowing for more efficient RPC traffic than HTTP 1.1's ASCII encoding. But gRPC also uses proto3's interface definition language (IDL) to specify the structure and the content that a web service's RPC messages will take.
To get started, the application developer writes a schema for the new web service, defining each of the requests and the possible responses that the service will use. The protocol buffers compiler (protoc) can then be used to generate classes and stub code (for client and server) from the schema, in any of the supported languages. At launch time, the available languages were Python, C++, C#, Objective-C, PHP, Ruby, Go, Java (both in generic and Android flavors), and JavaScript tailored to use with Node.js.
Quick-start guides are available for most of the languages (C#, Objective-C, and PHP guides are absent at the moment). Each of these tutorials begins with the same basic service definition:
syntax = "proto2"; // The greeting service definition. service Greeter { // Sends a greeting rpc SayHello (HelloRequest) returns (HelloReply) {} } // The request message containing the user's name. message HelloRequest { string name = 1; } // The response message containing the greetings message HelloReply { string message = 1; }
One side point is worth noting: for reasons that are not explained, the project's Python tutorial does, indeed, specify proto2 syntax rather than proto3. For the simple "Hello World" example used, proto2 and proto3 syntax are identical (which is visible when comparing it with the Java and Go tutorials), but the generated code does leave some artifacts behind, in the _pb2 portion of the module name.
The output language of the protoc compiler is specified with a command-line switch. Using --python_out, for example, generates a helloworld_pb2 module, a stub function called SayHello() for the client, and a Greeter class for the server that includes a stub function to respond to the SayHello() message as well as a simple serve() function that waits for incoming messages.
By using an abstract IDL, the argument goes, developers can quickly generate client code for a variety of different platforms, both desktop- and mobile-oriented. gRPC also makes it easy to include a version number in each message-format definition, so that clients and servers can support interoperability across versions.
The project has a detailed description of how gRPC's messages are sent in HTTP/2 frames. This is where some of HTTP/2's other advantages over HTTP 1.1 come into play.
For example, whenever an endpoint (client or server) sends a message, it can choose to keep the stream open, so that it can be used for bi-directional communication for the remainder of the session. gRPC uses HTTP/2's stream IDs as its internal identifiers for RPC calls, so the mapping between the RPC call and its associated stream comes for free. Multiplexing several open streams over one connection is another way that HTTP/2 enables more efficient use of the available bandwidth; it is up to the developer to determine how many active streams make sense for the service.
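To make the streaming model a bit more concrete, here is a speculative Python sketch of a bidirectional-streaming handler; the Chat RPC, the chat_pb2 module, and the servicer class are illustrative assumptions rather than part of the gRPC tutorials, but the pattern of consuming a request iterator and yielding replies over the same long-lived stream is the essential idea.

    # Speculative sketch of a bidirectional-streaming RPC handler.
    # Assumes a service defined with "stream" on both sides, e.g.:
    #     rpc Chat (stream HelloRequest) returns (stream HelloReply) {}
    # The module and class names below are illustrative only.
    import chat_pb2

    class ChatServicer(object):
        def Chat(self, request_iterator, context):
            # Every incoming message arrives on the HTTP/2 stream that
            # carries this RPC; replies are yielded back on the same
            # stream, so the two directions can be interleaved freely.
            for request in request_iterator:
                yield chat_pb2.HelloReply(message="Hello, %s!" % request.name)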
gRPC also piggybacks on HTTP/2 for its error codes. That allows the application to pick up on HTTP/2 error conditions (such as one endpoint of the stream throttling the connection) with little additional effort. Finally, gRPC is designed to support pluggable authentication methods; TLS 1.2 (or greater) and OAuth 2.0 are supported so far.
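For reference, the sketch below shows how TLS and token-based credentials can be combined in the present-day grpcio Python package; this API post-dates the alpha release discussed here, and the server address and token are placeholders.

    # Sketch of a TLS-protected channel with layered OAuth-style call
    # credentials, using today's grpcio package (the alpha-era Python
    # API differed); the address and token are made up.
    import grpc

    ssl_creds = grpc.ssl_channel_credentials()          # default CA roots
    token_creds = grpc.access_token_call_credentials("example-token")
    combined = grpc.composite_channel_credentials(ssl_creds, token_creds)

    # The resulting channel can be handed to any generated client stub.
    channel = grpc.secure_channel("greeter.example.com:443", combined)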
Where gRPC heads from here, naturally, remains to be seen. It is certainly a straightforward enough framework to warrant closer examination, and there will, no doubt, be many developers interested in seeing how they can take advantage of HTTP/2's promised improvements over HTTP 1.1.
That said, HTTP/2 is still in its infancy—one should expect to see a wide array of new frameworks announced in the next few years, all claiming to leverage the new protocol for a range of enhanced features. gRPC may be one of the first, but only time will tell how many developers outside of Google and its partners find it a good fit.
Brief items
Quotes of the week
In the 1990s, I was excited about the future, and I dreamed of a world where everyone would install GPG. Now I'm still excited about the future, but I dream of a world where I can uninstall it.
Samba 4.2.0 released
The Samba team has announced the first release in the new stable 4.2.x series. This release adds transparent file compression, access to "Snapper" snapshots via the Windows Explorer "previous versions" dialog, better clustering support, and much more. This release also marks the end of support for Samba 3.
Sphinx 1.3 released
Version 1.3 of the Sphinx documentation system has been released. Support for Python 3.4 has been added, while support for Python 2.5, 3.1, and 3.2 was dropped. Among the new features are several new themes, a builder for generating Apple Help output, and an extension for NumPy and Google-style docstrings.
xf86-input-libinput 0.8.0 available
Version 0.8.0 of xf86-input-libinput has been released. The update fixes excess scroll-speed problems on touchpads as well as a driver crash, and adds a configuration option for choosing between different click methods. Notably, the update relies on version 0.11 of the libinput library.
Mailpile: Beta Rejected!
The Mailpile project announced that, after processing the feedback
that resulted from its first public beta release, the team has pulled
back the release so that it can rethink things and continue
development. Among the highlighted issues were IMAP and GnuPG
support, but "there are a very large number of other smaller
bugs and loose ends that need work. Almost all of these are in the
"back end" of the app, the low level plumbing, gears and turbines and
closures and APIs that work behind the scenes. The back-end has been
playing catch up to the user interface for a while, it needs some
focus and attention before we can ship a real product.
" We took a look at the beta release in
September 2014.
vdpauinfo 1.0 available
Version 1.0 of vdpauinfo has been released. Vdpauinfo is a command-line utility that can be used to query the hardware-acceleration capabilities of Video Decode and Presentation API for Unix (VDPAU) video devices. New in 1.0 is support for querying the latest set of H.265 profiles.
Newsletters and articles
Development newsletters from the past week
- What's cooking in git.git (March 5)
- What's cooking in git.git (March 6)
- LLVM Weekly (March 9)
- OCaml Weekly News (March 10)
- OpenStack Community Weekly Newsletter (March 6)
- Perl Weekly (March 9)
- PostgreSQL Weekly News (March 8)
- Python Weekly (March 5)
- Ruby Weekly (March 5)
- This Week in Rust (March 9)
- Tor Weekly News (March 11)
- Wikimedia Tech News (March 9)
Edmundson: High DPI Progress
At his blog, David Edmundson writes
about the state of high-DPI support in KDE. "For some
applications supporting high DPI has been easy. It is a single one
line in KWrite, and suddenly all icons look spot on with no
regressions. For applications such as Dolphin which do a lot more
graphical tasks, this has not been so trivial. There are a lot of
images involved, and a lot of complicated code around caching these
which conflicts with the high resolution support without some further
work.
" He is personally tracking
the progress of many applications, but notes that there are many
unsolved issues. "There are still many applications without a frameworks release even in the upcoming 15.04 applications release. Even in the next applications release in 15.08 August we are still unlikely to see a released PIM stack.
Is it a good idea to add an option into our UIs that improves some applications at the cost of consistency? It's not an easy answer.
"
This update is Edmundson's second post on the subject; the first, from
November 2014, is also quite informative.
Morevna: How do we deal with spoilers?
The Morevna project is
developing an open-source animated series using a suite of open-source
graphics applications. But working in the open poses an
interesting question: how does one cope with spoilers leaking out to
fans of the project? At the Morevna blog, Konstantin Dmitriev
explains
the process chosen by the project. "The content that classified
as containing spoilers is published on the website, but hidden
(locked) from the public view. When the new episode is released, all
hidden content related to it becomes publicly visible. On our Patreon
page we have introduced a new category of supporters – Premium
Patrons. Premium Patrons have a special privilege for an early access
to the hidden content before the release.
" Morevna only
decided to move away from a feature-length production toward an
episodic framework in January 2015, so it will be interesting to see
how the new plan plays out.
(Hat tip to Paul Wise.)
Page editor: Nathan Willis
Announcements
Brief items
VMware update to GPL-enforcement suit
VMware has published a statement on the lawsuit filed by Christoph Hellwig alleging copyright infringement. "On March 5, 2015, Software Freedom Conservancy (SFC) announced a lawsuit in Germany, filed by Christoph Hellwig against VMware, alleging a failure to comply with the General Public License (GPL). We believe the lawsuit is without merit, and we are disappointed that the SFC and plaintiff have resorted to litigation given the considerable efforts we have made to understand and address their concerns. We see huge value in supporting multiple development methodologies, including free and open source software, and we appreciate the crucial role of free and open source software in the data center. In particular, VMware devotes significant effort supporting customer usage of Linux and F/OSS based software stacks and workloads." LWN recently covered the lawsuit. (Thanks to Emmanuel Seyman)
News from the FSF
The Free Software Foundation has issued a statement supporting the suit against VMware. "Unfortunately, VMware has broken this promise by not releasing the source code for the version of the operating system kernel they distribute with their ESXi software. Now, after many years of trying to work with VMware amicably, the Software Freedom Conservancy and Hellwig have sought the help of German courts to resolve the matter. While the Free Software Foundation (FSF) is not directly involved in the suit, we support the effort."
The FSF has also announced a new web portal, my.fsf.org, for its members and for non-member donations.
Calls for Presentations
GNU Tools Cauldron 2015 - Call for Abstracts and Participation
GNU Tools Cauldron will take place August 7-9 in Prague, Czech Republic. "The purpose of this workshop is to gather all GNU tools developers, discuss current/future work, coordinate efforts, exchange reports on ongoing efforts, discuss development plans for the next 12 months, developer tutorials and any other related discussions." The call for participation deadline is April 30.
CFP Deadlines: March 12, 2015 to May 11, 2015
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
March 15 | May 7–May 9 | Linuxwochen Wien 2015 | Wien, Austria |
March 15 | May 16–May 17 | MiniDebConf Bucharest 2015 | Bucharest, Romania |
March 31 | July 25–July 31 | Akademy 2015 | A Coruña, Spain |
March 31 | May 4–May 5 | CoreOS Fest | San Francisco, CA, USA |
April 3 | May 2–May 3 | Kolab Summit 2015 | The Hague, Netherlands |
April 4 | May 30–May 31 | Linuxwochen Linz 2015 | Linz, Austria |
April 6 | May 20–May 22 | SciPy Latin America 2015 | Posadas, Misiones, Argentina |
April 14 | April 14–April 15 | Palmetto Open Source Software Conference | Columbia, SC, USA |
April 15 | June 12–June 14 | Southeast Linux Fest | Charlotte, NC, USA |
April 17 | June 11–June 12 | infoShare 2015 | Gdańsk, Poland |
April 28 | July 20–July 26 | EuroPython 2015 | Bilbao, Spain |
April 30 | August 7–August 9 | GNU Tools Cauldron 2015 | Prague, Czech Republic |
May 1 | August 17–August 19 | LinuxCon North America | Seattle, WA, USA |
May 1 | September 10–September 13 | International Conference on Open Source Software Computing 2015 | Amman, Jordan |
May 1 | August 19–August 21 | KVM Forum 2015 | Seattle, WA, USA |
May 1 | August 19–August 21 | Linux Plumbers Conference | Seattle, WA, USA |
May 2 | August 12–August 15 | Flock | Rochester, New York, USA |
May 3 | August 7–August 9 | GUADEC | Gothenburg, Sweden |
May 3 | May 23–May 24 | Debian/Ubuntu Community Conference Italia - 2015 | Milan, Italy |
May 8 | July 31–August 4 | PyCon Australia 2015 | Brisbane, Australia |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Kansas Linux Fest
The talks at Kansas Linux Fest have been announced. "There will be over twenty presenters giving technical presentations and hands-on workshops throughout the conference. Presenters include Dave Lester, Twitter's open source advocate, Frank Wiles, Revolution Systems, and Hal Gottfried, Open Hardware Group Kansas City CCCKC. Alan Robertson of Assimilation Systems will be presenting on an open source network security system. Oracle's MySQL community manager, Dave Stokes, will be presenting two technical talks on MySQL, a popular relational database. Researchers from KU and K-State and Wichita State University will be presenting as well as Linux User Groups in Wichita and Omaha. Presentations on mobile phone security and open source phone hardware as well as system and cloud security are planned." Kansas Linux Fest will take place March 21-22 in Lawrence, Kansas.
LibrePlanet free software conference
The Free Software Foundation (FSF) and MIT's Student Information Processing Board (SIPB) have teamed up to bring LibrePlanet to Cambridge, MA, March 21-22. "LibrePlanet is an annual conference for people who care about their digital freedoms, bringing together software developers, policy experts, activists, and computer users to learn skills, share accomplishments, and face challenges facing the free software movement. LibrePlanet 2015 will feature programming for all ages and experience levels."
Events: March 12, 2015 to May 11, 2015
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
March 9–March 12 | FOSS4G North America | San Francisco, CA, USA |
March 11–March 12 | Vault Linux Storage and Filesystems Conference | Boston, MA, USA |
March 12–March 14 | Studencki Festiwal Informatyczny / Academic IT Festival | Cracow, Poland |
March 13–March 15 | FOSSASIA | Singapore |
March 13–March 15 | GStreamer Hackfest 2015 | London, UK |
March 16–March 17 | SREcon15 | Santa Clara, CA, USA |
March 17–March 19 | OpenPOWER Summit | San Jose, CA, USA |
March 21–March 22 | LibrePlanet 2015 | Cambridge, MA, USA |
March 21–March 22 | Kansas Linux Fest | Lawrence, Kansas, USA |
March 23–March 25 | Android Builders Summit | San Jose, CA, USA |
March 23–March 25 | Embedded Linux Conference | San Jose, CA, USA |
March 24–March 26 | FLOSSUK DevOps Conference | York, UK |
March 25–March 27 | PGConf US 2015 | New York City, NY, USA |
March 26 | Enlightenment Developers Day North America | Mountain View, CA, USA |
March 28–March 29 | Journées du Logiciel Libre | Lyon, France |
April 9–April 12 | Linux Audio Conference | Mainz, Germany |
April 10–April 12 | PyCon North America 2015 | Montreal, Canada |
April 11–April 12 | Lyon mini-DebConf 2015 | Lyon, France |
April 13–April 17 | SEA Conference | Boulder, CO, USA |
April 13–April 17 | ApacheCon North America | Austin, TX, USA |
April 13–April 14 | AdaCamp Montreal | Montreal, Quebec, Canada |
April 13–April 14 | 2015 European LLVM Conference | London, UK |
April 14–April 15 | Palmetto Open Source Software Conference | Columbia, SC, USA |
April 16–April 17 | Global Conference on Cyberspace | The Hague, Netherlands |
April 17–April 19 | Dni Wolnego Oprogramowania / The Open Source Days | Bielsko-Biała, Poland |
April 21 | pgDay Paris | Paris, France |
April 21–April 23 | Open Source Data Center Conference | Berlin, Germany |
April 23 | Open Source Day | Warsaw, Poland |
April 24 | Puppet Camp Berlin 2015 | Berlin, Germany |
April 24–April 25 | Grazer Linuxtage | Graz, Austria |
April 25–April 26 | LinuxFest Northwest | Bellingham, WA, USA |
April 29–May 2 | Libre Graphics Meeting 2015 | Toronto, Canada |
May 1–May 4 | openSUSE Conference | The Hague, Netherlands |
May 2–May 3 | Kolab Summit 2015 | The Hague, Netherlands |
May 4–May 5 | CoreOS Fest | San Francisco, CA, USA |
May 6–May 8 | German Perl Workshop 2015 | Dresden, Germany |
May 7–May 9 | Linuxwochen Wien 2015 | Wien, Austria |
May 8–May 10 | Open Source Developers' Conference Nordic | Oslo, Norway |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol