Development of System Configuration Management: Performance Considerations

Integrated caching in Consul greatly improved SCM config delivery speed, using goroutines and checksums to reduce load, though deployment latency initially increased.

By Georgii Kashintsev and Alexander Agrytskov · Sep. 17, 25 · Analysis

Series Overview

This article is Part 3 of a multi-part series: "Development of system configuration management."

The complete series:

  1. Introduction
  2. Migration and evolution
    1. Working with secrets, IaC, and deserializing data in Go
    2. Building the CLI and API
    3. Handling exclusive configurations and associated templates
  3. Performance considerations
  4. Summary and reflections

Performance

Caches

The first performance issue arose when we integrated the new SCM on a ClickHouse cluster with 75 hosts. These hosts request configuration every 10 seconds. However, building the complete configuration takes at least 5 seconds. This delay is due to the need to query our inventory service, read two configuration files from the local filesystem, and request data from Vault and Consul to prepare the response. Although these operations occur in parallel, they consume a significant amount of CPU. As a result, the API was overloaded with requests. We didn't want to decrease performance or slow down the update process for host configurations. This could lead to decreased deployment speeds, unsynchronized updates, and potential performance degradation in the future as new hosts come under the control of the new SCM.

Introducing caches at the Consul level significantly improved this situation; the caches are tied to a new configuration generation process. The API scans all declarative configurations for hostgroups. In this process, it builds the configuration in the same way as before, but instead of returning it in the API response, it stores the configuration in Consul. We prepared a path called 'generated/' in Consul, with a TTL of 1 hour, for the generated configurations. When agents query the API, it serves the prepared configuration from Consul. This improves speed and allows us to use only two API servers to provide configurations for more than 1,000 agents, which is sufficient for our performance needs.
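
To make the scheme concrete, here is a minimal sketch of such a cache layer using the official Consul Go client (github.com/hashicorp/consul/api). The key layout, the CachedConfig envelope, and the helper names are illustrative rather than our actual implementation; note also that Consul KV does not expire keys by itself, so this sketch stores a generation timestamp and checks freshness on the read path instead of relying on server-side expiry (which would require sessions or a cleanup job).

Go
 
package cache

import (
    "encoding/json"
    "fmt"
    "time"

    consulapi "github.com/hashicorp/consul/api"
)

// CachedConfig is an illustrative envelope for a generated configuration.
type CachedConfig struct {
    GeneratedAt time.Time `json:"generated_at"`
    Config      string    `json:"config"`
}

// SaveGenerated stores a built configuration under generated/<hostgroup>/<ip>.
func SaveGenerated(kv *consulapi.KV, hostgroup, ip, config string) error {
    payload, err := json.Marshal(CachedConfig{GeneratedAt: time.Now(), Config: config})
    if err != nil {
        return err
    }
    key := fmt.Sprintf("generated/%s/%s", hostgroup, ip)
    _, err = kv.Put(&consulapi.KVPair{Key: key, Value: payload}, nil)
    return err
}

// LoadGenerated returns the cached configuration if it is younger than maxAge.
func LoadGenerated(kv *consulapi.KV, hostgroup, ip string, maxAge time.Duration) (string, error) {
    pair, _, err := kv.Get(fmt.Sprintf("generated/%s/%s", hostgroup, ip), nil)
    if err != nil {
        return "", err
    }
    if pair == nil {
        return "", fmt.Errorf("no cached configuration for %s/%s", hostgroup, ip)
    }
    var cached CachedConfig
    if err := json.Unmarshal(pair.Value, &cached); err != nil {
        return "", err
    }
    if time.Since(cached.GeneratedAt) > maxAge {
        return "", fmt.Errorf("cached configuration for %s/%s is stale", hostgroup, ip)
    }
    return cached.Config, nil
}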

However, the overall deployment time for configurations degraded: previously, a configuration change could propagate in about ten seconds, but after introducing caching, it took up to 5 minutes. The delay was caused by the sequential building of configurations for each hostgroup. To avoid this issue, we introduced goroutines.

The caching scheme underwent several changes over time; more details are provided in the following sections.

Goroutines

The loop that reads the configuration files for hostgroups now creates a goroutine for each hostgroup, and each hostgroup goroutine in turn creates a goroutine for each host, as sketched below. While this significantly improves speed, we encountered issues with Consul being overloaded with IOPS due to the many writes to its keys: each host stored a temporary configuration in Consul and then wrote the final configuration.
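
A minimal sketch of this fan-out pattern, assuming the sync package is imported; the hostgroup map and the build function are hypothetical stand-ins for the real loop, which reads hostgroup files and stores results:

Go
 
// buildAll launches one goroutine per hostgroup; each of them launches one
// goroutine per host and waits for all hosts before the hostgroup is done.
func buildAll(hostgroups map[string][]string, buildHost func(hostgroup, host string)) {
    var hgWG sync.WaitGroup
    for hg, hosts := range hostgroups {
        hgWG.Add(1)
        go func(hg string, hosts []string) {
            defer hgWG.Done()
            var hostWG sync.WaitGroup
            for _, host := range hosts {
                hostWG.Add(1)
                go func(host string) {
                    defer hostWG.Done()
                    buildHost(hg, host) // build and store the configuration for one host
                }(host)
            }
            hostWG.Wait()
        }(hg, hosts)
    }
    hgWG.Wait()
}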

To resolve this, we made two key changes:

  1. We changed the storage for the temporary configuration to local memory on the API hosts, utilizing only sync.Map primitives to allow for simultaneous writes.
  2. We began reading and writing the checksum of the configuration alongside the final configuration. This enables us to check for differences between values; if there are no changes, the API does not write the value.

Overall, IOPS utilization on Consul hosts has decreased significantly, further improving speed.
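
A sketch of both changes, continuing the Consul client usage from the earlier example and assuming the crypto/sha256, encoding/hex, and sync packages are imported; the key naming is illustrative:

Go
 
// tempConfigs keeps intermediate per-host configurations in API-local memory
// instead of Consul; sync.Map tolerates concurrent writes from many goroutines.
var tempConfigs sync.Map

func StoreTemp(key, config string) {
    tempConfigs.Store(key, config)
}

// PutIfChanged writes the final configuration to Consul only when its checksum
// differs from the previously stored one, saving IOPS on unchanged hostgroups.
func PutIfChanged(kv *consulapi.KV, key, config string) error {
    sum := sha256.Sum256([]byte(config))
    checksum := hex.EncodeToString(sum[:])

    pair, _, err := kv.Get(key+"/checksum", nil)
    if err == nil && pair != nil && string(pair.Value) == checksum {
        return nil // nothing changed since the last build
    }

    if _, err := kv.Put(&consulapi.KVPair{Key: key, Value: []byte(config)}, nil); err != nil {
        return err
    }
    _, err = kv.Put(&consulapi.KVPair{Key: key + "/checksum", Value: []byte(checksum)}, nil)
    return err
}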

We also identified another way to speed up configuration building: rebuilding the configuration only for hostgroups that had changed. While this approach seemed obvious, it introduces a new problem: configurations can change in both Consul and Vault without the agent being aware of it. Consul offers a subscription mechanism for changes, but Vault does not.
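
On the Consul side, that subscription mechanism boils down to blocking queries. A minimal sketch, reusing the same client and assuming the time package is imported; the key name and callback are illustrative:

Go
 
// watchKey long-polls a Consul key and calls onChange whenever its index advances.
// Vault offers no equivalent mechanism, so Vault-backed data still needs periodic rebuilds.
func watchKey(kv *consulapi.KV, key string, onChange func(value []byte)) {
    var lastIndex uint64
    for {
        pair, meta, err := kv.Get(key, &consulapi.QueryOptions{
            WaitIndex: lastIndex,       // block until the key changes past lastIndex
            WaitTime:  5 * time.Minute, // or until the wait time elapses
        })
        if err != nil {
            time.Sleep(time.Second)
            continue
        }
        if meta.LastIndex != lastIndex {
            lastIndex = meta.LastIndex
            if pair != nil {
                onChange(pair.Value)
            }
        }
    }
}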

Issues With Goroutines and Cache Scheme

The first problem we encountered was increased complexity in identifying errors in configurations. Previously, agents always received the most recently built configuration. Now, if a user pushes a hostgroup.yaml file, the API may keep responding to agents with an outdated configuration for up to one hour. Metrics and logs improve visibility, allowing us to identify issues with specific hostgroup configurations more quickly. To improve visibility further, we added a field that indicates when the configuration was last built, enhancing observability for administrators.

We also faced some issues that are not directly related to the cache and goroutine scheme. The Go HTTP client does not treat 4xx responses from servers as errors. On two occasions, 400 responses from our inventory system were erroneously treated as valid by the SCM, leading to the generation of incorrect configurations. For instance, consider the following code snippet:

Go
 
hostsList, httpCode, err := HTTPRequest("GET", URL, "")
if err != nil {
    return
}
// httpCode is never checked here, so a 400 response with a nil error slips through


In this example, a 400 response code results in a nil error and an empty response. Because the host list structure was empty for some hostgroups, this incorrect configuration was applied to the agents. We have configurations where cluster lists are templated into the configuration files, so some clusters were degraded when hosts were automatically removed from their configurations, compromising the integrity of those clusters.

Only one factor mitigated the impact: agent-level parsers with a health check handler. This handler locks the configuration in Consul and waits for the lock to be released, which ensured that the software lost only one server at a time. However, clusters whose parser implementations did not run a health check before restarting did suffer degradation.

We then modified the code to check the response code:

Go
 
hostsList, httpCode, err := HTTPRequest("GET", URL, "")
if err != nil {
    return
}
if httpCode != 200 {
    // skip this hostgroup when the inventory returns a non-200 response
    return
}


However, another problem arose from returning early out of the function: if a data source had an issue, we stopped building the configuration for some hostgroups. This became a challenge when immediate changes were needed but could not be applied because of temporary network issues or failures in other services, since rebuilding all configurations from scratch would take even more time. We therefore revised the code to retry in a loop until the resource responds:

Go
 
hostsList, httpCode, err := HTTPRequest("GET", URL, "")
for err != nil || httpCode != 200 {
    // retry until the data source responds with a usable result
    time.Sleep(1 * time.Second)
    hostsList, httpCode, err = HTTPRequest("GET", URL, "")
}


This code retries as soon as the resource becomes responsive, does not halt work for other hostgroups (their builds continue in their own goroutines), and can wait indefinitely while the resource is unavailable.

How Our Initial Opinions Changed During Development

When we started development, we had certain beliefs about the process that seemed right to us. However, as new people joined the project, our opinions evolved.

First of all, we initially decided to template files only at the agent level. This approach offloads work from the centralized API and distributes the load across the entire infrastructure. Now, our new SCM can template configurations at both levels: if a template needs to include other files, templating happens on the server where all templates are stored, which is currently the API host; if a template has no associated templates, it is rendered at the agent level, as in the sketch below.
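
A minimal sketch of the agent-level case using Go's text/template; the facts map and file paths are illustrative and not our actual template contract:

Go
 
package templating

import (
    "os"
    "text/template"
)

// renderOnAgent renders a self-contained template with host facts directly on the agent.
// Templates that include other template files must instead be rendered on the API host,
// where the full template tree is stored.
func renderOnAgent(templateText, destPath string, facts map[string]interface{}) error {
    tmpl, err := template.New("config").Parse(templateText)
    if err != nil {
        return err
    }
    out, err := os.Create(destPath)
    if err != nil {
        return err
    }
    defer out.Close()
    return tmpl.Execute(out, facts)
}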

Secondly, we considered it unnecessary to template the main hostgroup files. As long as our company's configuration was homogeneous, this caused no problems and seemed logical. However, we now need to support multiple operating systems, and using templates at all levels has addressed this, as it does in other SCMs.

The third change in our thinking is that each host in a hostgroup does not necessarily need to have an identical configuration. This perspective was influenced by deployment needs and the presence of transitional configurations on different hosts during deployment.

This is not a comprehensive list of our changing opinions, but these are the most relevant to us now. I believe that, in the future, we may also reconsider other aspects of our approaches, which is a normal part of the evolution of any software.

Reusing Our SCM in Other Teams

In our company, we have multiple SRE teams, but the new SCM was developed for just one team. Until now, we haven’t been able to adapt it for others due to the specific requirements of our team's infrastructure. This issue is not with the SCM itself but rather with our architecture. Currently, we are at the beginning of this process and have identified a solution to facilitate this adaptation:

  • Make each manager and many functions public modules
  • Publish these modules as core functionality
  • Give teams the opportunity to use the core modules or write their own

This transition shifts our SCM from being a product to being a framework. Currently, two SRE teams in our company are integrating the new SCM into their infrastructure, and their setup serves as an example of how to integrate it. Start with the structure of a project that imports our SCM code as a module. The project structure for each team will look like this:

Plain Text
 
.
├── README.md
├── cmd
│   └── scm
│       └── scm.go
├── go.mod
├── go.sum
├── init
│   ├── scm-api.service
│   └── scm.service
├── internal
│   ├── merger
│   │   └── merger.go
│   └── parser
│       └── parsers.go


To begin, we should write the main function with the configuration parameters in the file cmd/scm/scm.go:

Go
 
func main() {
    logger.LogInit(os.Getenv("LOG_PATH"))
    conf.SCMOpt()

    logger.LogInit()
    vault.VaultConnect()
    vault.AdtechConnect()

    common.CreateDirIfNoExists(conf.LConf.HomeDir+"/", 0755, "root", "root")
    common.CreateDirIfNoExists(conf.LConf.FactsCacheDir+"/", 0755, "root", "root")

    if conf.LConf.API {
        LockKey := "buildconf_lock"
        crypt.GenCertIfNoExists()

        consul.ConsulClient()

        go common.SignalHandler(LockKey)
        go vault.TokenUpdater()
        go RebuildCache()
        go BuildConf(LockKey)
        go api.ListenReverseProxy()
        prometheus_local.PrometheusHandle(":9901")
        api.Router(GenHostJsonByHG)
    } else if conf.LConf.AGENT {
        crypt.GenCertIfNoExists()

        prometheus_local.PrometheusHandle(":9900")
        consul.ConsulClient()
        agent.RunAgent(parser.HOSTParsers)
    }
}

func RebuildCache() {
    for {
        scheduler.BuildCache(conf.LConf.CMDBApiUrl, conf.LConf.PDNSApiUrl)
        time.Sleep(time.Duration(conf.LConf.ScheduleTimeInSecods) * time.Second)
    }
}

func BuildConf(LockKey string) {
    for {
        err := common.GitUpdate(conf.LConf.FilesDir, "origin", "master")
        if err != nil {
            log.Println("Git update error:", err)
        }
        if common.SharedLock(LockKey, "1m") {
            consul.ConsulVaultMigrates()

            // create VMs
            common.YamlFileReadCb(conf.LConf.FilesDir+"/hostgroups/", ".yaml", generator.ParseYAMLtoVMcreate)

            // generate configurations
            generator.ScheduleGenerateHostgroupConfigurationForeachIP(GenHostJson)
            common.SharedUnlock(LockKey)
        } else {
            log.Println("Build lock already set")
        }
        time.Sleep(time.Duration(conf.LConf.ScheduleTimeInSecods) * time.Second)
    }
}

// Generates a JSON configuration for scheduler runs.
func GenHostJson(wg *sync.WaitGroup, GenerateTime uint64, IP string, HostgroupName string) {
    defer wg.Done()
    FullConfig, _ := generator.GenerateStaticConf(HostgroupName, IP, true, "/hostgroups/", GenerateTime, merger.APImergers)
    scheduler.SaveCryptConfig(FullConfig, HostgroupName, IP, GenerateTime)
}

// Generates a JSON configuration based on a specified hostgroup for API queries.
func GenHostJsonByHG(Hostgroup string) {
    t := time.Now()
    GenerateTime, _ := strconv.ParseUint(t.Format("20060102150405"), 10, 64)
    common.GitUpdate(conf.LConf.FilesDir, "origin", "master")
    Resp, err := consul.ConsulGetListByHG(Hostgroup)

    if err != nil {
        return
    }

    HgList := Resp["hg_list"].([]interface{})
    for _, Opt := range HgList {
        OptMap := Opt.(map[string]interface{})
        IP := OptMap["ip"].(string)

        if IP == "" {
            continue
        }

        FullConfig, err := generator.GenerateStaticConf(Hostgroup, IP, true, "/hostgroups/", GenerateTime, merger.APImergers)
        if err == nil {
            scheduler.SaveCryptConfig(FullConfig, Hostgroup, IP, GenerateTime)
        }
    }
}


Although the core provides this functionality, each function can be redefined by a team and implemented in a custom way. A common approach is to redefine the managers for parsers and mergers. For example, parsers.go would have the following content:

Go
 
import extManagers "core/managers"
import "internal/managers"
...

func HOSTParsers(source string, agentFullInfo resources.FullInfo) (map[string]interface{}, error) {
    var ApiResponse map[string]interface{} // interface
    ClientResponse := map[string]interface{}{}

    err := mergo.Merge(&ApiResponse, conf.LConf.AgentData)
    if err != nil {
        logger.FilesLog.Println("Mergo error:", err)
        return ClientResponse, errors.New("Mergo error:" + err.Error())
    }

    err = common.JSONUnmarshal([]byte(source), &ApiResponse)
    if err != nil {
        return ClientResponse, errors.New("json unmarshal failed")
    }

    err = common.JSONUnmarshal([]byte(source), &agentFullInfo.ResponseFromApi)
    if err != nil {
        logger.FilesLog.Println("Error while unmarshal agentFullInfo")
    }

    if ApiResponse == nil {
        return ClientResponse, errors.New("ApiResponse is nil")
    }

    if ApiResponse["immutable"] != nil {
        Immutable := ApiResponse["immutable"].(bool)
        if Immutable {
            if common.GetFlag("fullconf") {
                log.Println("Host is immutable and full configured")
                return ClientResponse, nil
            }
        }
    }

    preparsed := common.FilesPreparse(ApiResponse)

    common.FilesParser(ApiResponse, preparsed, "tops")
    common.PackageParser(ApiResponse, ClientResponse)
    common.DebParser(ApiResponse, ClientResponse)
    common.PipParser(ApiResponse, ClientResponse)
    common.SaltstackSettingUP(ApiResponse)
    common.PartitionParser(agentFullInfo, ApiResponse)
    common.Partition2Parser(agentFullInfo, ApiResponse, ClientResponse)
    common.GroupParser(ApiResponse, ClientResponse)
    common.UserParser(ApiResponse, ClientResponse)
    common.FilesParser(ApiResponse, preparsed, "alls")
    tls.CertificateParser(ApiResponse)
    common.DirectoryParser(ApiResponse, ClientResponse)
    common.ServiceParser(ApiResponse, ClientResponse)
    common.CommandParser(ApiResponse, ClientResponse)
    extManagers.NetworkParser(agentFullInfo, ApiResponse)
    common.GitParser(ApiResponse)
    extManagers.UnboundParser(ApiResponse, ClientResponse)
    extManagers.ClickhouseParser(ApiResponse, ClientResponse)
    extManagers.KafkaParser(ApiResponse, ClientResponse)
    extManagers.ZookeeperParser(ApiResponse, ClientResponse)
    extManagers.ElasticsearchParser(ApiResponse, ClientResponse)
    extManagers.DummyParser(ApiResponse, ClientResponse)
    common.HtpasswdParser(ApiResponse, ClientResponse)

    common.SetFlag("fullconf")
    return ClientResponse, nil
}


Additionally, mergers.go will include the following template:

Go
 
import extManagers "core/managers"
import "internal/managers"
...

func APISourcesMerger(ApiResponse map[string]interface{}) (error) {
    err := APISourcesParser(ApiResponse)
    if err != nil {
        return err
    }
    return nil
}

func APISourcesParser(ApiResponse map[string]interface{}) (error) {
    if ApiResponse["sources"] == nil {
        return nil
    }

    Sources := ApiResponse["sources"].(map[string]interface{})
    if Sources["vault"] != nil {
        Vault := Sources["vault"].([]interface{})
        VaultLen := len(Vault)
        for i := 0; i < VaultLen; i++ {
            VaultArrObj := Vault[i].(map[string]interface{})
            Path := ""
            if VaultArrObj["path"] == nil {
                continue
            }
            Path = VaultArrObj["path"].(string)

            Json := ""
            if VaultArrObj["json"] != nil {
                Json = VaultArrObj["json"].(string)
            }

            err := vault.VaultLoad(ApiResponse, Path, Json)
            if err != nil {
                return err
            }
        }
    }
    return nil
}

func APImergers(ApiResponse map[string]interface{}) error {
    if ApiResponse["facts"] == nil {
        return fmt.Errorf("there isn't facts in ApiResponse for APImergers")
    }
    agentFacts := resources.AgentFacts{}
    err := mapstructure.WeakDecode(ApiResponse["facts"], &agentFacts)
    if err != nil {
        return fmt.Errorf("can't parse facts from ApiResponse in APImergers: %v", err)
    }

    err = common.PwMerger(ApiResponse)
    if err != nil {
        return err
    }

    err = extManagers.ClickhousePwParser(ApiResponse)
    if err != nil {
        return err
    }

    err = common.HtpasswdMerger(ApiResponse)
    if err != nil {
        return err
    }

    extManagers.OpendkimMerger(ApiResponse)

    tls.CertificateStaticMerger(ApiResponse)
    tls.CertificateApiMerger(ApiResponse)

    // custom mergers specific to the team (vault, consul, etcd, s3, and other components to enrich the final JSON)
    err = APISourcesMerger(ApiResponse)
    if err != nil {
        return err
    }

    err = extMerger.APIMappingMerger(ApiResponse)
    if err != nil {
        logger.GenConfLog.Println("APIMapping error:", err)
        return err
    }

    hostgroup, err := common.GetStringFromMap(ApiResponse, "Hostgroup")
    if err != nil {
        logger.GenConfLog.Println("APIMapping cannot get hostgroup error:", err)
        return err
    }

    common.DirectoryMerger(ApiResponse)
    extManagers.DockerMerger(ApiResponse, agentFacts)
    extManagers.AlligatorMerger(ApiResponse, agentFacts)
    extManagers.AerospikeMerger(ApiResponse)
    extManagers.Mi6Merger(ApiResponse)
    extManagers.NetworkMerger(ApiResponse, agentFacts)
    common.UserMerger(ApiResponse)
    common.UserArrayMerger(ApiResponse)
    generator.PrometheusMergeExporters(ApiResponse)
    extManagers.UnboundMerger(ApiResponse)
    extManagers.WazuhMerger(ApiResponse, agentFacts)
    extManagers.AuditdMerger(ApiResponse, agentFacts)
    extManagers.DummyMerger(ApiResponse)
    common.DeployTokenParser(ApiResponse)
    common.ServiceMerger(ApiResponse)
    common.OverrideMerger(ApiResponse)
    extManagers.ClickhouseMerger(ApiResponse)
    extManagers.KafkaMerger(ApiResponse)
    extManagers.ZookeeperMerger(ApiResponse)
    extManagers.ElasticsearchMerger(ApiResponse)
    managers.USSDMerger(ApiResponse)

    common.APIFileLoad(hostgroup, ApiResponse)

    return nil
}

In this example, we have two types of parsers: core parsers and custom parsers. Teams can combine them to choose the best way to generate configurations. If something is relevant to them, they can use it as is or rewrite it to better suit their needs. As a result, we have flexible control over the use of different managers, similar to SaltStack formulas or Ansible playbooks, and we can implement all of this in Go using its large open-source ecosystem and the SCM core functions.

Data Providers as Modules

It is worth acknowledging that not everything is perfect. Currently, we depend on two external data sources and two in-house integrations:

  • Inventory system
  • HashiCorp Vault
  • Our own Prometheus alert API implementation
  • Our own cloud implementation

This is one reason why this SCM cannot yet be used universally. While HashiCorp Vault is a leading solution for storing secrets, the other integrations depend on our in-house development. For instance, if we wanted to release our SCM as open source, we would also need to open source these integrations. Many open-source products have already faced this problem and found a solution: they use data providers as pluggable modules.

For example, integration could occur at the code level. We can use the go build -buildmode=plugin command, which builds a provider as a shared object that the main binary loads at runtime via the dlopen mechanism, as sketched below. Alternatively, integration can occur through command execution or via HTTP pull or push interfaces with the SCM. There are various methods for creating pluggable modules.
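
A minimal sketch of the plugin variant: the provider is a separate main package built with go build -buildmode=plugin, and the SCM loads it with the standard plugin package. The GetHosts symbol name and its signature are purely illustrative:

Go
 
package providers

import (
    "fmt"
    "plugin"
)

// The provider plugin would export, for example:
//
//     func GetHosts(hostgroup string) ([]string, error) { ... }
//
// loadHostsProvider opens the shared object at runtime and asserts the looked-up
// symbol to the expected function type.
func loadHostsProvider(path string) (func(string) ([]string, error), error) {
    p, err := plugin.Open(path)
    if err != nil {
        return nil, err
    }
    sym, err := p.Lookup("GetHosts")
    if err != nil {
        return nil, err
    }
    getHosts, ok := sym.(func(string) ([]string, error))
    if !ok {
        return nil, fmt.Errorf("GetHosts has an unexpected signature")
    }
    return getHosts, nil
}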

Even if you do not plan to open-source the software, pluggable data providers can still be worthwhile: different teams within a company often use different data providers and may want to adopt your SCM. The approach increases the flexibility of the resulting software, but it also requires extra time to develop the modules and integrate them into the main codebase.

Two or More API Instances

Multiple API instances can serve agent requests efficiently, and we can scale their number as widely as we need. However, we encountered some issues while scaling the API, related to the scheduler processes that build the configuration cache in Consul.

Simultaneous configuration building can lead to race conditions. For instance, one API instance might start building earlier but finish later; the older configuration would then overwrite the newer one in Consul, leading to inconsistencies until the next update.

A locking system resolves this issue by preventing duplicate generation across API instances, as sketched below. However, the locking system has a drawback: if a host goes down during a configuration build, the lock persists in Consul for some time, halting all changes until it expires. Despite this, we determined that this is the best of the available options.
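
A sketch of such a lock on top of the Consul client's lock helper; the key name, session TTL, and single-attempt behavior illustrate the semantics described above rather than our exact SharedLock implementation:

Go
 
// buildWithLock runs the cache build only if this API instance wins the Consul lock.
// The session TTL bounds how long a lock left behind by a crashed host can block others.
func buildWithLock(client *consulapi.Client, build func()) error {
    lock, err := client.LockOpts(&consulapi.LockOptions{
        Key:         "buildconf_lock",
        SessionTTL:  "1m",
        LockTryOnce: true, // try once and give up if another instance holds the lock
    })
    if err != nil {
        return err
    }
    held, err := lock.Lock(nil)
    if err != nil {
        return err
    }
    if held == nil {
        return nil // another API instance is already building the cache
    }
    defer lock.Unlock()

    build()
    return nil
}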

In the future, if configuration building takes significantly longer, a more effective approach would be to split the assembly of hostgroups across different API instances.

Development and Production Environments

For testing purposes, we want a unified configuration repository for both testing and production. This makes it easy to migrate hostgroups between the production and development APIs to verify new functionality. To enable this, we introduced a parameter that describes the API URL: configuration_api_url. For each hostgroup, this parameter specifies the URL the SCM agent uses, and it can be modified at runtime. The API knows its own URL, for example:

  • scm-prod.example.com
  • scm-dev.example.com

If the API's own key is scm-prod.example.com but the hostgroup specifies configuration_api_url: scm-dev.example.com, generation of that configuration is skipped. The default configuration (default.yaml) uses scm-prod.example.com, meaning that all hosts connect to the production SCM API by default. If {hostgroup}.yaml sets configuration_api_url: scm-dev.example.com, the agent switches to that URL, and the production SCM API stops generating the configuration for that hostgroup.

Go
 
func GenerateConf(Hostgroup string, IP string, NeedDef bool, Hostgroupsdir string, GenerateTime uint64) (string, error) {
    ...
    // read default.yaml
    // merge with group.yaml

    if GroupConf.ConfigurationApiUrl != Conf.MyUrl {
        // this hostgroup is served by another SCM API; skip generation here
        return "", nil
    }

    // others merges
    ...
    return string(FullConf), nil
}


A single push to Git is enough to move a hostgroup from the production environment into the SCM testing environment when developing new features in Go.

Tests

Currently, only manual tests are available in the new SCM. We use a development cluster with a host that can be deployed with any new SCM codebase before we deploy to production.

There are several challenges associated with using automated tests for SCM functionality:

  • Many testable functions resemble the following example:
Go
 
func FileAbsent(name string) {
    _, err := os.Stat(name)
    if err != nil {
        // the file does not exist (or cannot be statted), so there is nothing to remove
        return
    }

    err = os.Remove(name)
    if err != nil {
        logger.Println("Error while deleting filename: " + name + ": ", err)
    }
}


This type of function is merely a wrapper around the Go standard library, and many functions follow this pattern. Such functions do not need tests: testing them complicates maintenance of the test code, wastes time on unnecessary checks, and distracts from our main issues.

  • Many operating system interfaces are not idempotent.

Some operations can only be performed once, necessitating a rollback to the previous state afterward. This includes tasks such as working with partitions, deleting files, and sending signals. Many of these operations involve low-level OS interfaces that are quite inconvenient to manage. Low-level interfaces present one of many challenges in developing the SCM: we often call shell commands to interact with the OS and parse their stdout/stderr to make decisions. Our SCM relies on the following command-line utilities:

  • yum
  • rpm
  • lsblk
  • lvresize
  • lvdisplay
  • vgcreate
  • lvcreate
  • mdadm
  • mount
  • mkfs.*
  • cryptsetup

Parsing stdout line by line is a tedious task you will likely encounter when developing your own SCM. While many programming languages offer various wrappers for such tasks, they are not universal and may not cover all use cases.
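
One way to reduce that tedium, where a utility supports it, is to ask for structured output. Here is a sketch for lsblk, whose --json flag makes parsing far less fragile; the struct covers only a few illustrative fields:

Go
 
package blockdev

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// blockDevice mirrors a subset of `lsblk --json` output.
type blockDevice struct {
    Name       string        `json:"name"`
    Type       string        `json:"type"`
    FSType     string        `json:"fstype"`
    Mountpoint string        `json:"mountpoint"`
    Children   []blockDevice `json:"children"`
}

type lsblkOutput struct {
    BlockDevices []blockDevice `json:"blockdevices"`
}

// listBlockDevices shells out to lsblk and decodes its JSON output instead of
// parsing the default column layout line by line.
func listBlockDevices() ([]blockDevice, error) {
    out, err := exec.Command("lsblk", "--json", "-o", "NAME,TYPE,FSTYPE,MOUNTPOINT").Output()
    if err != nil {
        return nil, fmt.Errorf("lsblk failed: %w", err)
    }
    var parsed lsblkOutput
    if err := json.Unmarshal(out, &parsed); err != nil {
        return nil, err
    }
    return parsed.BlockDevices, nil
}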

Non-idempotency presents another challenge. For instance, when creating partitions, you must track each step (a guard for the mkfs step is sketched after this list):

  • Check the labels of partitions to remember what has been done based on the configuration context.
  • If the last layer of the block device has been created and the filesystem (FS) is specified, use mkfs to format it.
  • Maintain the correct order for creating logical partitions (e.g., MD over LVM, LVM over MD, crypto over MD or LVM, or a simple FS over a GPT partition).
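
As one example of such a guard, here is a sketch of the mkfs step that checks for an existing filesystem with blkid before formatting; the commands and the assumption that an empty blkid result means "no filesystem" are illustrative, and os/exec and strings are assumed to be imported:

Go
 
// ensureFilesystem formats the device only if blkid reports no existing filesystem,
// making the mkfs step safe to repeat on every SCM run.
func ensureFilesystem(device, fstype string) error {
    // blkid exits non-zero and prints nothing when it finds no recognizable filesystem
    out, err := exec.Command("blkid", "-o", "value", "-s", "TYPE", device).Output()
    if err == nil && len(strings.TrimSpace(string(out))) > 0 {
        return nil // a filesystem is already present; reformatting would destroy data
    }
    return exec.Command("mkfs."+fstype, device).Run()
}

Guards like this are what keep repeated agent runs from destroying data while the configuration context evolves.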

