Commit 2899379791

Daemons intended for public use can be set up to require payment in the form of hashes in exchange for RPC service. This enables public daemons to receive payment for their work over a large number of calls. This system behaves similarly to a pool, so payment takes the form of valid blocks every so often, yielding a large one-off payment, rather than constant micropayments.

This system can also be used by third parties as a "paywall" layer, where users of a service can pay for use by mining Monero to the service provider's address. An example of this for web site access is Primo, a Monero-mining based website "paywall": https://github.com/selene-kovri/primo

This has some advantages:
- incentive to run a node providing RPC services, thereby promoting the availability of third-party nodes for those who can't run their own
- incentive to run your own node instead of using a third party's, thereby promoting decentralization
- decentralized: payment is done between a client and server, with no third party needed
- private: since the system is "pay as you go", you don't need to identify yourself to claim a long-lived balance
- no payment occurs on the blockchain, so there is no extra transactional load
- one may mine with a beefy server, and use those credits from a phone, by reusing the client ID (at the cost of some privacy)
- no barrier to entry: anyone may run an RPC node, and your expected revenue depends on how much work you do
- Sybil resistant: if you run 1000 idle RPC nodes, you don't magically get more revenue
- no large credit balance maintained on servers, so they have no incentive to exit scam
- you can use any/many node(s), since there's little cost in switching servers
- market-based prices: competition between servers to lower costs
- incentive for a distributed third-party node system: if some public nodes are overused/slow, traffic can move to others
- increases network security
- helps counteract mining pools' share of the network hash rate
- zero incentive for a payer to "double spend", since a reorg does not give any money back to the miner

And some disadvantages:
- low-power clients will have difficulty mining (but one can optionally mine in advance and/or with a faster machine)
- payment is "random", so a server might go a long time without a block before getting one
- a public node's overall expected payment may be small

Public nodes are expected to compete to find a suitable level for cost of service.

The daemon can be set up this way to require payment for RPC services:

  monerod --rpc-payment-address 4xxxxxx \
    --rpc-payment-credits 250 --rpc-payment-difficulty 1000

These values are an example only.

The --rpc-payment-difficulty switch selects how hard each "share" should be, similar to a mining pool. The higher the difficulty, the fewer shares a client will find. The --rpc-payment-credits switch selects how many credits are awarded for each share a client finds. Considering both options, clients will be awarded credits/difficulty credits for every hash they calculate. For example, in the command line above, 0.25 credits per hash. A client mining at 100 H/s will therefore get an average of 25 credits per second. For reference, in the current implementation, a credit is enough to sync 20 blocks, so a 100 H/s client that's just starting to use Monero and uses this daemon will be able to sync 500 blocks per second.

The wallet can be set to automatically mine if connected to a daemon which requires payment for RPC usage. It will try to keep a balance of 50000 credits, stopping mining when it's at this level, and starting again as credits are spent. With the example above, a new client will mine this many credits in about half an hour, and this target is enough to sync 500000 blocks (currently about a third of the Monero blockchain).

There are three new settings in the wallet:

- credits-target: this is the amount of credits a wallet will try to reach before stopping mining. The default of 0 means 50000 credits.

- auto-mine-for-rpc-payment-threshold: this controls the minimum credit rate which the wallet considers worth mining for. If the daemon credits less than this ratio, the wallet will consider mining to be not worth it. In the example above, the rate is 0.25.

- persistent-rpc-client-id: if set, this allows the wallet to reuse a client ID across runs. This means a public node can tell that a wallet connecting now is the same one that connected previously, but it allows a wallet to keep its credit balance from one run to the next. Since the wallet only mines to keep a small credit balance, this is not normally worth doing. However, someone may want to mine on a fast server, and use that credit balance on a low-power device such as a phone. If left unset, a new client ID is generated at each wallet start, for privacy reasons.

To mine and use a credit balance on two different devices, you can use the --rpc-client-secret-key switch. A wallet's client secret key can be found using the new rpc_payments command in the wallet. Note: anyone knowing your RPC client secret key is able to use your credit balance.

The wallet has a few new commands too:

- start_mining_for_rpc: start mining to acquire more credits, regardless of the auto mining settings
- stop_mining_for_rpc: stop mining to acquire more credits
- rpc_payments: display information about current credits with the currently selected daemon

The node has an extra command:

- rpc_payments: display information about clients and their balances

The node will forget about any balance for clients which have been inactive for 6 months. Balances carry over on node restart.
521 lines
19 KiB
C++
// Copyright (c) 2014-2019, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
//    conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
//    of conditions and the following disclaimer in the documentation and/or other
//    materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
//    used to endorse or promote products derived from this software without specific
//    prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers

#pragma once

#include <iosfwd>
#include <list>
#include <string>
#include <vector>

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/identity.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/optional/optional.hpp>
#include <boost/range/adaptor/reversed.hpp>

#include "cryptonote_config.h"
#include "net/enums.h"
#include "net/local_ip.h"
#include "p2p_protocol_defs.h"
#include "syncobj.h"

namespace nodetool
{
  struct peerlist_types
  {
    std::vector<peerlist_entry> white;
    std::vector<peerlist_entry> gray;
    std::vector<anchor_peerlist_entry> anchor;
  };

  class peerlist_storage
  {
  public:
    peerlist_storage()
      : m_types{}
    {}

    //! \return Peers stored in stream `src` in `new_format` (portable archive or older non-portable).
    static boost::optional<peerlist_storage> open(std::istream& src, const bool new_format);

    //! \return Peers stored in file at `path`
    static boost::optional<peerlist_storage> open(const std::string& path);

    peerlist_storage(peerlist_storage&&) = default;
    peerlist_storage(const peerlist_storage&) = delete;

    ~peerlist_storage() noexcept;

    peerlist_storage& operator=(peerlist_storage&&) = default;
    peerlist_storage& operator=(const peerlist_storage&) = delete;

    //! Save peers from `this` and `other` in stream `dest`.
    bool store(std::ostream& dest, const peerlist_types& other) const;

    //! Save peers from `this` and `other` in one file at `path`.
    bool store(const std::string& path, const peerlist_types& other) const;

    //! \return Peers in `zone`, removing them from `this`.
    peerlist_types take_zone(epee::net_utils::zone zone);

  private:
    peerlist_types m_types;
  };

  /************************************************************************/
  /*                                                                      */
  /************************************************************************/
  class peerlist_manager
  {
  public:
    bool init(peerlist_types&& peers, bool allow_local_ip);
    size_t get_white_peers_count(){CRITICAL_REGION_LOCAL(m_peerlist_lock); return m_peers_white.size();}
    size_t get_gray_peers_count(){CRITICAL_REGION_LOCAL(m_peerlist_lock); return m_peers_gray.size();}
    bool merge_peerlist(const std::vector<peerlist_entry>& outer_bs);
    bool get_peerlist_head(std::vector<peerlist_entry>& bs_head, bool anonymize, uint32_t depth = P2P_DEFAULT_PEERS_IN_HANDSHAKE);
    void get_peerlist(std::vector<peerlist_entry>& pl_gray, std::vector<peerlist_entry>& pl_white);
    void get_peerlist(peerlist_types& peers);
    bool get_white_peer_by_index(peerlist_entry& p, size_t i);
    bool get_gray_peer_by_index(peerlist_entry& p, size_t i);
    template<typename F> bool foreach(bool white, const F &f);
    bool append_with_peer_white(const peerlist_entry& pr);
    bool append_with_peer_gray(const peerlist_entry& pr);
    bool append_with_peer_anchor(const anchor_peerlist_entry& ple);
    bool set_peer_just_seen(peerid_type peer, const epee::net_utils::network_address& addr, uint32_t pruning_seed, uint16_t rpc_port, uint32_t rpc_credits_per_hash);
    bool set_peer_unreachable(const peerlist_entry& pr);
    bool is_host_allowed(const epee::net_utils::network_address &address);
    bool get_random_gray_peer(peerlist_entry& pe);
    bool remove_from_peer_gray(const peerlist_entry& pe);
    bool get_and_empty_anchor_peerlist(std::vector<anchor_peerlist_entry>& apl);
    bool remove_from_peer_anchor(const epee::net_utils::network_address& addr);
    bool remove_from_peer_white(const peerlist_entry& pe);

  private:
    struct by_time{};
    struct by_id{};
    struct by_addr{};

    struct modify_all_but_id
    {
      modify_all_but_id(const peerlist_entry& ple):m_ple(ple){}
      void operator()(peerlist_entry& e)
      {
        e.id = m_ple.id;
      }
    private:
      const peerlist_entry& m_ple;
    };

    struct modify_all
    {
      modify_all(const peerlist_entry& ple):m_ple(ple){}
      void operator()(peerlist_entry& e)
      {
        e = m_ple;
      }
    private:
      const peerlist_entry& m_ple;
    };

    struct modify_last_seen
    {
      modify_last_seen(time_t last_seen):m_last_seen(last_seen){}
      void operator()(peerlist_entry& e)
      {
        e.last_seen = m_last_seen;
      }
    private:
      time_t m_last_seen;
    };

    typedef boost::multi_index_container<
      peerlist_entry,
      boost::multi_index::indexed_by<
        // access by peerlist_entry::adr (net address)
        boost::multi_index::ordered_unique<boost::multi_index::tag<by_addr>, boost::multi_index::member<peerlist_entry,epee::net_utils::network_address,&peerlist_entry::adr> >,
        // sort by peerlist_entry::last_seen
        boost::multi_index::ordered_non_unique<boost::multi_index::tag<by_time>, boost::multi_index::member<peerlist_entry,int64_t,&peerlist_entry::last_seen> >
      >
    > peers_indexed;

    typedef boost::multi_index_container<
      anchor_peerlist_entry,
      boost::multi_index::indexed_by<
        // access by anchor_peerlist_entry::adr (net address)
        boost::multi_index::ordered_unique<boost::multi_index::tag<by_addr>, boost::multi_index::member<anchor_peerlist_entry,epee::net_utils::network_address,&anchor_peerlist_entry::adr> >,
        // sort by anchor_peerlist_entry::first_seen
        boost::multi_index::ordered_non_unique<boost::multi_index::tag<by_time>, boost::multi_index::member<anchor_peerlist_entry,int64_t,&anchor_peerlist_entry::first_seen> >
      >
    > anchor_peers_indexed;

  private:
    void trim_white_peerlist();
    void trim_gray_peerlist();

    friend class boost::serialization::access;
    epee::critical_section m_peerlist_lock;
    std::string m_config_folder;
    bool m_allow_local_ip;

    peers_indexed m_peers_gray;
    peers_indexed m_peers_white;
    anchor_peers_indexed m_peers_anchor;
  };
  //--------------------------------------------------------------------------------------------------
  inline void peerlist_manager::trim_gray_peerlist()
  {
    while(m_peers_gray.size() > P2P_LOCAL_GRAY_PEERLIST_LIMIT)
    {
      peers_indexed::index<by_time>::type& sorted_index=m_peers_gray.get<by_time>();
      sorted_index.erase(sorted_index.begin());
    }
  }
  //--------------------------------------------------------------------------------------------------
  inline void peerlist_manager::trim_white_peerlist()
  {
    while(m_peers_white.size() > P2P_LOCAL_WHITE_PEERLIST_LIMIT)
    {
      peers_indexed::index<by_time>::type& sorted_index=m_peers_white.get<by_time>();
      sorted_index.erase(sorted_index.begin());
    }
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::merge_peerlist(const std::vector<peerlist_entry>& outer_bs)
  {
    CRITICAL_REGION_LOCAL(m_peerlist_lock);
    for(const peerlist_entry& be: outer_bs)
    {
      append_with_peer_gray(be);
    }
    // delete extra elements
    trim_gray_peerlist();
    return true;
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::get_white_peer_by_index(peerlist_entry& p, size_t i)
  {
    CRITICAL_REGION_LOCAL(m_peerlist_lock);
    if(i >= m_peers_white.size())
      return false;

    peers_indexed::index<by_time>::type& by_time_index = m_peers_white.get<by_time>();
    p = *epee::misc_utils::move_it_backward(--by_time_index.end(), i);
    return true;
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::get_gray_peer_by_index(peerlist_entry& p, size_t i)
  {
    CRITICAL_REGION_LOCAL(m_peerlist_lock);
    if(i >= m_peers_gray.size())
      return false;

    peers_indexed::index<by_time>::type& by_time_index = m_peers_gray.get<by_time>();
    p = *epee::misc_utils::move_it_backward(--by_time_index.end(), i);
    return true;
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::is_host_allowed(const epee::net_utils::network_address &address)
  {
    //never allow loopback ip
    if(address.is_loopback())
      return false;

    if(!m_allow_local_ip && address.is_local())
      return false;

    return true;
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::get_peerlist_head(std::vector<peerlist_entry>& bs_head, bool anonymize, uint32_t depth)
  {
    CRITICAL_REGION_LOCAL(m_peerlist_lock);
    peers_indexed::index<by_time>::type& by_time_index=m_peers_white.get<by_time>();
    uint32_t cnt = 0;

    // picks a random set of peers within the first 120%, rather than a set of the first 100%.
    // The intent is that if someone asks twice, they can't easily tell:
    // - this address was not in the first list, but is in the second, so the only way this can be
    //   is if its last_seen was recently reset, so this means the target node recently had a new
    //   connection to that address
    // - this address was in the first list, and not in the second, which means either the address
    //   was moved to the gray list (if it's not accessible, which the attacker can check if
    //   the address accepts incoming connections) or it was the oldest to still fit in the 250 items,
    //   so its last_seen is old.
    //
    // See Cao, Tong et al. "Exploring the Monero Peer-to-Peer Network". https://eprint.iacr.org/2019/411
    //
    const uint32_t pick_depth = anonymize ? depth + depth / 5 : depth;
    bs_head.reserve(pick_depth);
    for(const peers_indexed::value_type& vl: boost::adaptors::reverse(by_time_index))
    {
      if(cnt++ >= pick_depth)
        break;

      bs_head.push_back(vl);
    }

    if (anonymize)
    {
      std::shuffle(bs_head.begin(), bs_head.end(), crypto::random_device{});
      if (bs_head.size() > depth)
        bs_head.resize(depth);
      for (auto &e: bs_head)
        e.last_seen = 0;
    }

    return true;
  }
  //--------------------------------------------------------------------------------------------------
  template<typename F> inline
  bool peerlist_manager::foreach(bool white, const F &f)
  {
    CRITICAL_REGION_LOCAL(m_peerlist_lock);
    peers_indexed::index<by_time>::type& by_time_index = white ? m_peers_white.get<by_time>() : m_peers_gray.get<by_time>();
    for(const peers_indexed::value_type& vl: boost::adaptors::reverse(by_time_index))
      if (!f(vl))
        return false;
    return true;
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::set_peer_just_seen(peerid_type peer, const epee::net_utils::network_address& addr, uint32_t pruning_seed, uint16_t rpc_port, uint32_t rpc_credits_per_hash)
  {
    TRY_ENTRY();
    CRITICAL_REGION_LOCAL(m_peerlist_lock);
    //find in white list
    peerlist_entry ple;
    ple.adr = addr;
    ple.id = peer;
    ple.last_seen = time(NULL);
    ple.pruning_seed = pruning_seed;
    ple.rpc_port = rpc_port;
    ple.rpc_credits_per_hash = rpc_credits_per_hash;
    return append_with_peer_white(ple);
    CATCH_ENTRY_L0("peerlist_manager::set_peer_just_seen()", false);
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::append_with_peer_white(const peerlist_entry& ple)
  {
    TRY_ENTRY();
    if(!is_host_allowed(ple.adr))
      return true;

    CRITICAL_REGION_LOCAL(m_peerlist_lock);
    //find in white list
    auto by_addr_it_wt = m_peers_white.get<by_addr>().find(ple.adr);
    if(by_addr_it_wt == m_peers_white.get<by_addr>().end())
    {
      //put new record into white list
      m_peers_white.insert(ple);
      trim_white_peerlist();
    }else
    {
      //update record in white list
      peerlist_entry new_ple = ple;
      if (by_addr_it_wt->pruning_seed && ple.pruning_seed == 0) // guard against older nodes not passing pruning info around
        new_ple.pruning_seed = by_addr_it_wt->pruning_seed;
      if (by_addr_it_wt->rpc_port && ple.rpc_port == 0) // guard against older nodes not passing RPC port around
        new_ple.rpc_port = by_addr_it_wt->rpc_port;
      new_ple.last_seen = by_addr_it_wt->last_seen; // do not overwrite the last seen timestamp, incoming peer lists are untrusted
      m_peers_white.replace(by_addr_it_wt, new_ple);
    }
    //remove from gray list, if needed
    auto by_addr_it_gr = m_peers_gray.get<by_addr>().find(ple.adr);
    if(by_addr_it_gr != m_peers_gray.get<by_addr>().end())
    {
      m_peers_gray.erase(by_addr_it_gr);
    }
    return true;
    CATCH_ENTRY_L0("peerlist_manager::append_with_peer_white()", false);
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::append_with_peer_gray(const peerlist_entry& ple)
  {
    TRY_ENTRY();
    if(!is_host_allowed(ple.adr))
      return true;

    CRITICAL_REGION_LOCAL(m_peerlist_lock);
    //find in white list
    auto by_addr_it_wt = m_peers_white.get<by_addr>().find(ple.adr);
    if(by_addr_it_wt != m_peers_white.get<by_addr>().end())
      return true;

    //update gray list
    auto by_addr_it_gr = m_peers_gray.get<by_addr>().find(ple.adr);
    if(by_addr_it_gr == m_peers_gray.get<by_addr>().end())
    {
      //put new record into gray list
      m_peers_gray.insert(ple);
      trim_gray_peerlist();
    }else
    {
      //update record in gray list
      peerlist_entry new_ple = ple;
      if (by_addr_it_gr->pruning_seed && ple.pruning_seed == 0) // guard against older nodes not passing pruning info around
        new_ple.pruning_seed = by_addr_it_gr->pruning_seed;
      if (by_addr_it_gr->rpc_port && ple.rpc_port == 0) // guard against older nodes not passing RPC port around
        new_ple.rpc_port = by_addr_it_gr->rpc_port;
      new_ple.last_seen = by_addr_it_gr->last_seen; // do not overwrite the last seen timestamp, incoming peer lists are untrusted
      m_peers_gray.replace(by_addr_it_gr, new_ple);
    }
    return true;
    CATCH_ENTRY_L0("peerlist_manager::append_with_peer_gray()", false);
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::append_with_peer_anchor(const anchor_peerlist_entry& ple)
  {
    TRY_ENTRY();

    CRITICAL_REGION_LOCAL(m_peerlist_lock);

    auto by_addr_it_anchor = m_peers_anchor.get<by_addr>().find(ple.adr);

    if(by_addr_it_anchor == m_peers_anchor.get<by_addr>().end()) {
      m_peers_anchor.insert(ple);
    }

    return true;

    CATCH_ENTRY_L0("peerlist_manager::append_with_peer_anchor()", false);
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::get_random_gray_peer(peerlist_entry& pe)
  {
    TRY_ENTRY();

    CRITICAL_REGION_LOCAL(m_peerlist_lock);

    if (m_peers_gray.empty()) {
      return false;
    }

    size_t random_index = crypto::rand_idx(m_peers_gray.size());

    peers_indexed::index<by_time>::type& by_time_index = m_peers_gray.get<by_time>();
    pe = *epee::misc_utils::move_it_backward(--by_time_index.end(), random_index);

    return true;

    CATCH_ENTRY_L0("peerlist_manager::get_random_gray_peer()", false);
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::remove_from_peer_white(const peerlist_entry& pe)
  {
    TRY_ENTRY();

    CRITICAL_REGION_LOCAL(m_peerlist_lock);

    peers_indexed::index_iterator<by_addr>::type iterator = m_peers_white.get<by_addr>().find(pe.adr);

    if (iterator != m_peers_white.get<by_addr>().end()) {
      m_peers_white.erase(iterator);
    }

    return true;

    CATCH_ENTRY_L0("peerlist_manager::remove_from_peer_white()", false);
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::remove_from_peer_gray(const peerlist_entry& pe)
  {
    TRY_ENTRY();

    CRITICAL_REGION_LOCAL(m_peerlist_lock);

    peers_indexed::index_iterator<by_addr>::type iterator = m_peers_gray.get<by_addr>().find(pe.adr);

    if (iterator != m_peers_gray.get<by_addr>().end()) {
      m_peers_gray.erase(iterator);
    }

    return true;

    CATCH_ENTRY_L0("peerlist_manager::remove_from_peer_gray()", false);
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::get_and_empty_anchor_peerlist(std::vector<anchor_peerlist_entry>& apl)
  {
    TRY_ENTRY();

    CRITICAL_REGION_LOCAL(m_peerlist_lock);

    auto begin = m_peers_anchor.get<by_time>().begin();
    auto end = m_peers_anchor.get<by_time>().end();

    std::for_each(begin, end, [&apl](const anchor_peerlist_entry &a) {
      apl.push_back(a);
    });

    m_peers_anchor.get<by_time>().clear();

    return true;

    CATCH_ENTRY_L0("peerlist_manager::get_and_empty_anchor_peerlist()", false);
  }
  //--------------------------------------------------------------------------------------------------
  inline
  bool peerlist_manager::remove_from_peer_anchor(const epee::net_utils::network_address& addr)
  {
    TRY_ENTRY();

    CRITICAL_REGION_LOCAL(m_peerlist_lock);

    anchor_peers_indexed::index_iterator<by_addr>::type iterator = m_peers_anchor.get<by_addr>().find(addr);

    if (iterator != m_peers_anchor.get<by_addr>().end()) {
      m_peers_anchor.erase(iterator);
    }

    return true;

    CATCH_ENTRY_L0("peerlist_manager::remove_from_peer_anchor()", false);
  }
  //--------------------------------------------------------------------------------------------------
}