memcached
Here are 671 public repositories matching this topic...
- Updated Oct 26, 2021 - C#
- Updated Nov 5, 2021 - Ruby
- Updated Nov 3, 2021 - PHP
- Updated Jun 26, 2021 - C#
Currently we don't have any mechanism to limit the maximum number of clients that can be handled simultaneously.
This feature should be designed properly. Here is a clue: https://redis.io/topics/clients#maximum-number-of-clients
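A common way to enforce such a limit is to cap the number of concurrently accepted connections with a semaphore and refuse anything beyond it. The sketch below only illustrates the idea (shown in C# for consistency with the other snippets on this page); the listener setup, the limit of 1000, and the refuse-on-overflow behaviour are assumptions, not this project's actual design.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

class Server
{
    // Hypothetical limit, analogous to Redis' maxclients setting.
    private static readonly SemaphoreSlim MaxClients = new SemaphoreSlim(1000);

    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 11211);
        listener.Start();

        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();

            // Refuse the connection immediately once the limit is reached,
            // similar to Redis' "max number of clients reached" reply.
            if (!MaxClients.Wait(0))
            {
                client.Close();
                continue;
            }

            _ = Task.Run(async () =>
            {
                try { await HandleClientAsync(client); }
                finally { MaxClients.Release(); }
            });
        }
    }

    // Placeholder for the actual protocol handling.
    static Task HandleClientAsync(TcpClient client) => Task.CompletedTask;
}
```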
- Updated Oct 5, 2021 - Go
It seems we need a RemoveAll method that takes no parameters, or one that can be called to remove everything and invalidate the whole cache.
For now I use the following code:

```csharp
var listPrefix = new List<string>
{
    "foo",
    "bar",
    "another-foo"
};

listPrefix.ForEach(prefix =>
{
    cachingProvider.RemoveByPrefix(prefix);
});
```

Instead of writing the code above, we might be able to write a single call like the one below:
` cachin
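As an interim helper, the multi-prefix loop could at least be wrapped in an extension method; a true parameterless RemoveAll would need support in the provider itself (for example a backend-level flush). The interface name `IEasyCachingProvider` and the `RemoveAll` name below are assumptions for illustration only:

```csharp
using System.Collections.Generic;
using EasyCaching.Core; // assumed namespace of the caching provider interface

public static class CachingProviderExtensions
{
    // Hypothetical convenience method: invalidate several prefixes in one call.
    public static void RemoveAll(this IEasyCachingProvider provider,
                                 IEnumerable<string> prefixes)
    {
        foreach (var prefix in prefixes)
        {
            provider.RemoveByPrefix(prefix);
        }
    }
}
```

With such a helper the example above collapses to `cachingProvider.RemoveAll(new[] { "foo", "bar", "another-foo" });`.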
- Updated Jul 15, 2021 - C++
- Updated Sep 22, 2020 - Shell
- Updated Sep 28, 2021 - Java
Suggestion: add Get / GetAsync methods to BaseRepository that return IQueryable.
With these available, LINQ can be composed in the service layer to pull related entities' data into a ViewModel, as in the sketch below.
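A minimal sketch of what the suggested member could look like, assuming an EF Core-backed repository; the `_dbContext` field and the class shape are illustrative, not the project's actual BaseRepository:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class BaseRepository<TEntity> where TEntity : class
{
    private readonly DbContext _dbContext;

    public BaseRepository(DbContext dbContext) => _dbContext = dbContext;

    // Return the un-materialized query so the service layer can keep
    // composing LINQ (joins, projections to ViewModels) before execution.
    public IQueryable<TEntity> Get() => _dbContext.Set<TEntity>().AsNoTracking();
}
```

A service could then project straight into a ViewModel, e.g. `repo.Get().Select(o => new OrderViewModel { Id = o.Id, CustomerName = o.Customer.Name }).ToListAsync()` (names hypothetical), which also covers the async case without needing an IQueryable-returning GetAsync.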
Steps to reproduce:
- I have about 5 million items in Redis.
- Call `cache.clear()` without arguments.
As a temporary workaround I had to call redis-cli FLUSHALL from the terminal.
- Updated Aug 2, 2021 - C++
- Updated Nov 4, 2021 - Go
- Updated Jan 8, 2019 - PHP
- Updated May 20, 2021 - C++
- Updated Mar 25, 2019 - JavaScript
- Updated Sep 9, 2021 - Go
- Updated Sep 15, 2021 - Go
Can Get be given request headers the way Post can? It seems this is not currently provided.
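A hedged sketch of what such an overload might look like; the `Get` name mirrors the question, but the signature and the HttpClient-based body are purely illustrative assumptions about a client wrapper, not the library's existing API:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class HttpHelper
{
    private static readonly HttpClient Client = new HttpClient();

    // Hypothetical overload: GET with per-request headers, symmetric with Post.
    public static async Task<string> Get(string url,
                                         IDictionary<string, string> headers = null)
    {
        using var request = new HttpRequestMessage(HttpMethod.Get, url);
        if (headers != null)
        {
            foreach (var kv in headers)
            {
                request.Headers.TryAddWithoutValidation(kv.Key, kv.Value);
            }
        }

        using var response = await Client.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```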
MinSendBackupAfterMs is now set to 1 ms. Unfortunately this causes backup requests to be sent even when load is low. It would be useful if we could set this minimum a little higher for certain endpoints.
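A rough sketch of how a per-endpoint override could look; the type and property names below (`BackupRequestOptions`, `EndpointOverrides`) are invented for illustration and do not correspond to the library's current configuration surface:

```csharp
using System;
using System.Collections.Generic;

// Illustrative configuration shape only; all names are hypothetical.
public class BackupRequestOptions
{
    // Global floor before a backup request may be sent (currently ~1 ms).
    public TimeSpan MinSendBackupAfter { get; set; } = TimeSpan.FromMilliseconds(1);

    // Optional per-endpoint overrides, keyed by endpoint name or address.
    public Dictionary<string, TimeSpan> EndpointOverrides { get; } =
        new Dictionary<string, TimeSpan>();

    public TimeSpan MinSendBackupAfterFor(string endpoint) =>
        EndpointOverrides.TryGetValue(endpoint, out var value) ? value : MinSendBackupAfter;
}
```

For example, `options.EndpointOverrides["slow-pool"] = TimeSpan.FromMilliseconds(20);` would keep backup requests from firing for that endpoint under normal low-latency, low-load conditions while leaving the global 1 ms floor in place elsewhere.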