# btcpool Mining Pool Source Code Analysis (10): StratumServer Module
## Summary of Core Mechanisms
* A received job that is more than 60 seconds old is discarded
* If a job's prevHash differs from the prevHash of the local job, a new block has been produced and the job's isClean flag is set to true
  * isClean = true tells miners to switch to the new job immediately
* A new job is sent to miners in three cases:
  * a job at a new height is received
  * the previous job was a new-height, empty-block job and the latest job is a non-empty-block job
  * the fixed 30-second notify interval has elapsed
* The time of the most recent job notification is written to a file (specified by file_last_notify_time)
* Local jobs are valid for 300 seconds
* The user list is fetched every 10 seconds (from list_id_api_url) and the users are written into a local map
* sserver can use at most 16777214 session IDs
* btcpool supports both the BtcAgent extension protocol and the Stratum protocol, distinguished by the magic_number (0x7F); see the sketch after this list
* Handling the Stratum protocol:
  * suggest_target is equivalent to suggest_difficulty; it sets the initial mining difficulty and must be requested before subscribe
  * the session ID is used as extraNonce1_ so that no two miners receive the same work
  * the read timeout is 15 seconds before authorize and 10 minutes after authorize; a connection with no submission for 10 minutes is closed
  * the initial difficulty is 16384, or the value given by suggest_difficulty; the next adjustment jumps straight to the target difficulty so that a share is submitted roughly every 10 seconds
  * each session maintains a localJobs_ queue holding at most 10 jobs
  * cases in which a share is rejected:
    * JOB_NOT_FOUND: the job has already been pushed out of the localJobs_ queue
    * DUPLICATE_SHARE: the share was already submitted; submitted shares are recorded in submitShares_
    * JOB_NOT_FOUND: the job no longer exists in jobRepository_, i.e. it has expired (300-second lifetime)
    * JOB_NOT_FOUND: the job in jobRepository_ is marked stale, i.e. it is an old job rather than the current one
    * TIME_TOO_OLD: the nTime submitted in the share is less than the job's minTime
    * TIME_TOO_NEW: the nTime submitted in the share is more than 10 minutes ahead of the job's nTime
    * LOW_DIFFICULTY: the hash submitted in the share does not meet the difficulty target
* Handling the BtcAgent extension protocol:
  * miners behind an Agent also default to a difficulty of 16384
  * the Agent session ID is used as the first half of extraNonce2_ so that no two miners behind the Agent receive the same work
  * when the pool sends a new job and the session is a BtcAgent session, the difficulty is recalculated for every miner behind that Agent
  * if any difficulties change, one CMD_MINING_SET_DIFF command is built per distinct difficulty and they are all sent down together
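As a minimal illustration of the magic-number dispatch mentioned above (a sketch, not the btcpool implementation), a server can peek at the first byte of an incoming frame: a BtcAgent ex-message starts with 0x7F, while a Stratum request is a JSON line and therefore starts with `{`:
```c++
#include <cstdint>
#include <cstddef>

// 0x7F is the magic_number from the text; the enum and function names here
// are illustrative only.
static const uint8_t kExMessageMagic = 0x7F;

enum class FrameType { STRATUM_JSON, BTCAGENT_EX, NEED_MORE_DATA };

FrameType detectFrameType(const uint8_t *buf, size_t len) {
  if (len == 0) {
    return FrameType::NEED_MORE_DATA;  // wait for more bytes
  }
  // BtcAgent ex-messages begin with the magic byte; anything else is treated
  // as a line-based JSON Stratum request.
  return (buf[0] == kExMessageMagic) ? FrameType::BTCAGENT_EX
                                     : FrameType::STRATUM_JSON;
}
```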
## StratumServer Command Usage
```shell
sserver -c sserver.cfg -l log_dir
# -c specifies the sserver configuration file
# -l specifies the log directory
```
## The sserver.cfg Configuration File
```shell
// whether to use testnet
testnet = true;

// kafka cluster
kafka = {
  brokers = "1.1.1.1:9092,2.2.2.2:9092,3.3.3.3:9092";
};

// sserver settings
sserver = {
  // listening IP and port
  ip = "0.0.0.0";
  port = 3333;

  // server id, globally unique, range [1, 255]
  id = 1;

  // the time of the last mining notify is written to this file, for monitoring
  file_last_notify_time = "/work/xxx/sserver_lastnotifytime.txt";

  // if the simulator is enabled, every share is accepted; for testing only
  enable_simulator = false;

  // if enabled, every share is submitted as a block; for testing only
  enable_submit_invalid_block = false;

  // target interval between two share submissions
  share_avg_seconds = 10;
};

users = {
  // user list API
  list_id_api_url = "https://example.com/get_user_id_list";
};
```
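The globally unique server id above combines with the per-connection session ID (at most 16777214 = 0x00FFFFFE values, as noted in the summary) to keep work unique across the whole pool. A minimal sketch of how the 32-bit extraNonce1_ could be composed, assuming the server id occupies the top 8 bits and the 24-bit session ID the rest (the exact packing is an assumption):
```c++
#include <cstdint>

const uint32_t kMaxSessionId = 0x00FFFFFEu;  // 16777214 usable session IDs

// Assumed layout for illustration: | 8-bit server id | 24-bit session id |
// The server id keeps extraNonce1_ unique across sserver instances; the
// session id keeps it unique across connections on a single instance.
uint32_t composeExtraNonce1(uint8_t serverId, uint32_t sessionId) {
  return (static_cast<uint32_t>(serverId) << 24) | (sessionId & kMaxSessionId);
}
```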
## StratumServer Flowchart
## The SOLVED_SHARE Message

When a share's block hash meets the network target (or enable_submit_invalid_block is turned on for testing), sserver builds a FoundBlock, sends it to Kafka as a SOLVED_SHARE message, and marks all jobs as stale:
```c++
if (isSubmitInvalidBlock_ == true || bnBlockHash <= bnNetworkTarget) {
  //
  // build found block
  //
  FoundBlock foundBlock;
  foundBlock.jobId_    = share.jobId_;
  foundBlock.workerId_ = share.workerHashId_;
  foundBlock.userId_   = share.userId_;
  foundBlock.height_   = sjob->height_;
  memcpy(foundBlock.header80_, (const uint8_t *)&header, sizeof(CBlockHeader));
  snprintf(foundBlock.workerFullName_, sizeof(foundBlock.workerFullName_),
           "%s", workFullName.c_str());
  // send
  sendSolvedShare2Kafka(&foundBlock, coinbaseBin);

  // mark jobs as stale
  jobRepository_->markAllJobsAsStale();

  LOG(INFO) << ">>>> found a new block: " << blkHash.ToString()
            << ", jobId: " << share.jobId_ << ", userId: " << share.userId_
            << ", by: " << workFullName << " <<<<";
}
```
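sendSolvedShare2Kafka receives both the FoundBlock and the raw coinbase transaction, so the SOLVED_SHARE payload is presumably the two concatenated. A self-contained sketch of that packing, with btcpool's struct and Kafka producer replaced by an opaque byte span and a hypothetical callback:
```c++
#include <cstring>
#include <functional>
#include <string>
#include <vector>

// Sketch of the packing only: the FoundBlock is reduced to an opaque byte
// span and the Kafka producer to a callback, so the assumed message layout
// (FoundBlock bytes followed by the raw coinbase transaction) is visible.
void packSolvedShareMessage(const void *foundBlock, size_t foundBlockSize,
                            const std::vector<char> &coinbaseBin,
                            const std::function<void(const void *, size_t)> &produce) {
  std::string buf;
  buf.resize(foundBlockSize + coinbaseBin.size());

  // FoundBlock struct bytes first
  memcpy(&buf[0], foundBlock, foundBlockSize);
  // the raw coinbase transaction follows immediately after
  if (!coinbaseBin.empty()) {
    memcpy(&buf[foundBlockSize], coinbaseBin.data(), coinbaseBin.size());
  }

  // hand the combined buffer to the solved-share Kafka producer
  produce(buf.data(), buf.size());
}
```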
## Calculating Mining Difficulty

DiffController (btcpool/src/StratumSession.h) adjusts each session's difficulty from the number of shares seen in a sliding 900-second window:
```c++
// Constructor (btcpool/src/StratumSession.h)
// kMinDiff_ is the minimum difficulty:       static const uint64 kMinDiff_ = 64;
// kMaxDiff_ is the maximum difficulty:       static const uint64 kMaxDiff_ = 4611686018427387904ull;
// kDefaultDiff_ is the default initial difficulty:   static const uint64 kDefaultDiff_ = 16384;
// kDiffWindow_ is the time window covering N shares: static const time_t kDiffWindow_ = 900;
// kRecordSeconds_ is the length of one share record: static const time_t kRecordSeconds_ = 10;
// sharesNum_ and shares_ are therefore both sized to 900 / 10 = 90 records
DiffController(const int32_t shareAvgSeconds) :
  startTime_(0),
  minDiff_(kMinDiff_), curDiff_(kDefaultDiff_), curHashRateLevel_(0),
  sharesNum_(kDiffWindow_/kRecordSeconds_), /* every N seconds as a record */
  shares_  (kDiffWindow_/kRecordSeconds_)
{
  if (shareAvgSeconds >= 1 && shareAvgSeconds <= 60) {
    shareAvgSeconds_ = shareAvgSeconds;
  } else {
    shareAvgSeconds_ = 8;
  }
}

// Calculate the current mining difficulty; never below the minimum difficulty (64)
uint64 DiffController::calcCurDiff() {
  uint64 diff = _calcCurDiff();
  if (diff < minDiff_) {
    diff = minDiff_;
  }
  return diff;
}

uint64 DiffController::_calcCurDiff() {
  const time_t now = time(nullptr);
  const int64 k = now / kRecordSeconds_;
  const double sharesCount = (double)sharesNum_.sum(k);
  if (startTime_ == 0) { // first time, we set the start time
    startTime_ = time(nullptr);
  }

  const double kRateHigh = 1.40;
  const double kRateLow  = 0.40;
  // expected number of shares within the time window (900 seconds)
  double expectedCount = round(kDiffWindow_ / (double)shareAvgSeconds_);

  // isFullWindow(now): return now >= startTime_ + kDiffWindow_;
  if (isFullWindow(now)) { /* have a full window now */
    // big miner have big expected share count to make it looks more smooth.
    expectedCount *= minerCoefficient(now, k);
  }
  if (expectedCount > kDiffWindow_) {
    // at most one share per second, so the expected count is capped at 900
    expectedCount = kDiffWindow_; // one second per share is enough
  }

  // this is for very low hashrate miner, eg. USB miners
  // should received at least one share every 60 seconds
  // If the window is not yet full, more than 60 seconds have passed, fewer than
  // one share per 60 seconds has come in, and the current difficulty is at least
  // twice the minimum, halve the difficulty.
  if (!isFullWindow(now) && now >= startTime_ + 60 &&
      sharesCount <= (int32_t)((now - startTime_)/60.0) &&
      curDiff_ >= minDiff_*2) {
    setCurDiff(curDiff_ / 2);
    sharesNum_.mapMultiply(2.0);
    return curDiff_;
  }

  // too fast
  // the share count exceeds 1.4 times the expected count
  if (sharesCount > expectedCount * kRateHigh) {
    // while the share count still exceeds the expected count and the current
    // difficulty is below the maximum, double the difficulty
    while (sharesNum_.sum(k) > expectedCount &&
           curDiff_ < kMaxDiff_) {
      setCurDiff(curDiff_ * 2);
      sharesNum_.mapDivide(2.0); // halve the recorded share count
    }
    return curDiff_;
  }

  // too slow
  // the window is full and the current difficulty is at least twice the minimum
  if (isFullWindow(now) && curDiff_ >= minDiff_*2) {
    // while the share count is below 0.4 times the expected count and the current
    // difficulty is at least twice the minimum, halve the difficulty
    while (sharesNum_.sum(k) < expectedCount * kRateLow &&
           curDiff_ >= minDiff_*2) {
      setCurDiff(curDiff_ / 2);
      sharesNum_.mapMultiply(2.0); // double the recorded share count
    }
    assert(curDiff_ >= minDiff_);
    return curDiff_;
  }

  return curDiff_;
}
```
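A worked example of the "too fast" branch (ignoring minerCoefficient, which is not shown above): with share_avg_seconds = 10, expectedCount = round(900 / 10) = 90. If 200 shares arrived in the window, 200 > 90 × 1.4 = 126, so the loop doubles the difficulty and halves the recorded count until it falls to 90 or below. The numbers can be reproduced with this small standalone program (variable names are illustrative):
```c++
#include <cstdint>
#include <iostream>

int main() {
  uint64_t curDiff = 16384;          // kDefaultDiff_
  double sharesCount = 200.0;        // shares recorded in the 900-second window
  const double expectedCount = 90.0; // round(900 / share_avg_seconds)

  if (sharesCount > expectedCount * 1.40) {    // "too fast" branch
    while (sharesCount > expectedCount) {
      curDiff *= 2;                            // setCurDiff(curDiff_ * 2)
      sharesCount /= 2.0;                      // sharesNum_.mapDivide(2.0)
    }
  }
  // 200 -> 100 -> 50 shares; 16384 -> 32768 -> 65536 difficulty
  std::cout << "adjusted difficulty: " << curDiff << std::endl;
  return 0;
}
```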
## How sserver Validates Shares
* The local job list localJobs_ keeps at most the 10 most recent jobs; every new job pushes out the oldest one. If the job a share refers to is no longer in the local list, the share is rejected with StratumError::JOB_NOT_FOUND.
* If the share is already present in the local submit list submitShares_, it is a duplicate and is rejected with StratumError::DUPLICATE_SHARE.
* Server-side validation failures:
  * the job is not found in the job list exJobs_ (jobs in exJobs_ expire and are deleted after 300 seconds): StratumError::JOB_NOT_FOUND
  * the share's nTime is less than the job's minimum time, i.e. too old: StratumError::TIME_TOO_OLD
  * the share's nTime exceeds the job's nTime by more than 10 minutes, i.e. too new: StratumError::TIME_TOO_NEW
  * the block hash is greater than the job's difficulty target, i.e. does not qualify: StratumError::LOW_DIFFICULTY
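A compressed sketch of this check order, with simplified, purely illustrative parameters and return codes (the real checks live in StratumSession and Server and use btcpool's own types):
```c++
#include <cstdint>

// Purely illustrative types; the booleans stand in for the lookups into
// localJobs_, submitShares_ and exJobs_ described above.
enum class ShareStatus {
  ACCEPT, JOB_NOT_FOUND, DUPLICATE_SHARE,
  TIME_TOO_OLD, TIME_TOO_NEW, LOW_DIFFICULTY
};

ShareStatus checkShare(bool inLocalJobs,        // still present in localJobs_?
                       bool alreadySubmitted,   // already recorded in submitShares_?
                       bool jobExpiredOrStale,  // missing or stale in exJobs_?
                       uint32_t nTime, uint32_t jobMinTime, uint32_t jobNTime,
                       bool hashMeetsTarget) {
  if (!inLocalJobs)           return ShareStatus::JOB_NOT_FOUND;
  if (alreadySubmitted)       return ShareStatus::DUPLICATE_SHARE;
  if (jobExpiredOrStale)      return ShareStatus::JOB_NOT_FOUND;
  if (nTime < jobMinTime)     return ShareStatus::TIME_TOO_OLD;
  if (nTime > jobNTime + 600) return ShareStatus::TIME_TOO_NEW;  // 10 minutes
  if (!hashMeetsTarget)       return ShareStatus::LOW_DIFFICULTY;
  return ShareStatus::ACCEPT;
}
```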
## How sserver Sends New Jobs
1. If a stratum job at a new height is received, a new job is sent immediately.
```c++
bool isClean = false;
if (latestPrevBlockHash_ != sjob->prevHash_) {
  isClean = true;
  latestPrevBlockHash_ = sjob->prevHash_;
  LOG(INFO) << "received new height statum job, height: " << sjob->height_
            << ", prevhash: " << sjob->prevHash_.ToString();
}

shared_ptr<StratumJobEx> exJob = std::make_shared<StratumJobEx>(sjob, isClean);
{
  ScopeLock sl(lock_);

  if (isClean) {
    // mark all jobs as stale, should do this before insert new job
    for (auto it : exJobs_) {
      it.second->markStale();
    }
  }

  // insert new job
  exJobs_[sjob->jobId_] = exJob;
}

if (isClean) {
  sendMiningNotify(exJob);
  return;
}
```
2. If the previous job was a new-height, empty-block job and the latest job is a non-empty-block job, a new job is sent as soon as possible.
```c++
if (isClean == false && exJobs_.size() >= 2) {
  auto itr = exJobs_.rbegin();
  shared_ptr<StratumJobEx> exJob1 = itr->second;
  itr++;
  shared_ptr<StratumJobEx> exJob2 = itr->second;

  if (exJob2->isClean_ == true &&
      exJob2->sjob_->merkleBranch_.size() == 0 &&
      exJob1->sjob_->merkleBranch_.size() != 0) {
    sendMiningNotify(exJob);
  }
}
```
3. Whenever the fixed interval (30 seconds) has elapsed, a new job is sent.
```c++
void JobRepository::checkAndSendMiningNotify() {
  // last job has 'expired', send a new one
  if (exJobs_.size() &&
      lastJobSendTime_ + kMiningNotifyInterval_ <= time(nullptr))
  {
    shared_ptr<StratumJobEx> exJob = exJobs_.rbegin()->second;
    sendMiningNotify(exJob);
  }
}

JobRepository::JobRepository(const char *kafkaBrokers,
                             const string &fileLastNotifyTime,
                             Server *server):
  running_(true),
  kafkaConsumer_(kafkaBrokers, KAFKA_TOPIC_STRATUM_JOB, 0/*partition*/),
  server_(server), fileLastNotifyTime_(fileLastNotifyTime),
  kMaxJobsLifeTime_(300),
  kMiningNotifyInterval_(30), // TODO: make as config arg
  lastJobSendTime_(0)
{
  assert(kMiningNotifyInterval_ < kMaxJobsLifeTime_);
}
```
## References
* [BtcAgent](https://github.com/btccom/btcagent)
* [BtcAgent Communication Protocol](https://github.com/btccom/btcpool/blob/master/docs/AGENT.md)