Today we officially tagged the latest version of the Hive API node stack (v1.27.11).
We’ve been running various beta versions of the new stack for quite a while on api.hive.blog, and it’s been easy to see that it performs much better than the old stack running on most of the other Hive API nodes. The release version of the API is now accessible at https://api.syncad.com, and we’ll switch api.hive.blog over to the release version of the stack tomorrow.
With the official release of the new stack, I expect most of the other nodes will be updating to it within the next week or so, and we should see higher performance across the ecosystem.
Here’s a quick summary of some of what the BlockTrades team has been working on since my last report. As usual, it’s not a complete list. Red items are GitLab links to the actual work done.
Upgrading everything to build/run on Ubuntu 24
One of the main changes we made across the entire set of apps was updating our build/deployment environment from Ubuntu 22 to Ubuntu 24. This required more work than expected, because it also involved an upgrade to a new version of Python, which is heavily used in our testing system and by several of our development tools, so that Python code had to be updated as well.
Hived: blockchain node software
We improved the snapshot/replay processing so that you can first resume from a snapshot, then replay any additional blocks in your current block log, instead of requiring your node to re-sync those blocks.
Optimizations In progress
We’re currently finishing up two long-planned performance improvements: 1) a fixed-size block memory allocator (https://gitlab.syncad.com/hive/hive/-/merge_requests/1525) and 2) moving comment objects from memory into a RocksDB database.
While benchmarking the fixed-block allocator, we saw an 18% speedup in in-memory replay time and a bigger speedup (23%) for disk-based replays. The new allocator also reduces memory usage by 1665 MB.
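The basic mechanics of a fixed-size pool allocator can be illustrated with a toy sketch (this is a hypothetical Python illustration, not hived’s actual C++ implementation): slots of one size are preallocated and recycled through a free list, so allocations avoid general-purpose heap bookkeeping and fragmentation.

```python
class FixedPool:
    """Toy fixed-size pool: preallocate N same-sized slots and recycle
    them via a free list instead of hitting the general-purpose heap."""
    def __init__(self, slot_count):
        self._free = list(range(slot_count))  # indices of unused slots
        self._slots = [None] * slot_count

    def allocate(self, obj):
        if not self._free:
            raise MemoryError("pool exhausted")
        idx = self._free.pop()                # O(1), no heap search
        self._slots[idx] = obj
        return idx

    def release(self, idx):
        self._slots[idx] = None
        self._free.append(idx)                # slot is recycled, not freed

pool = FixedPool(4)
a = pool.allocate("comment")
pool.release(a)
b = pool.allocate("vote")                     # reuses the slot just freed
```

Because objects of one type always land in the same preallocated region, fragmentation is largely eliminated, which is one reason measured allocation can drop even though the pool itself is reserved up front.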
Moving comment objects out of memory also drastically reduced the size of the state file, which will make it easier for systems with relatively small amounts of memory to do “in-memory” replays. I’ll provide more details later, after I’ve personally benchmarked the new code, but everything I’ve heard so far sounds quite impressive.
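A two-tier comment store of the kind described can be sketched as follows (a hypothetical Python illustration; sqlite3 stands in for RocksDB here, and the real hived code is C++ with caching and undo handling that this sketch omits):

```python
import sqlite3

class CommentStore:
    """Two-tier store sketch: fresh comments stay in RAM, archived
    ones move to a disk-backed table, shrinking the in-memory state."""
    def __init__(self):
        self.fresh = {}                        # id -> body (hot tier, in RAM)
        self.db = sqlite3.connect(":memory:")  # use an on-disk path in practice
        self.db.execute("CREATE TABLE archive (id INTEGER PRIMARY KEY, body TEXT)")

    def archive(self, comment_id):
        body = self.fresh.pop(comment_id)      # evict from the hot tier
        self.db.execute("INSERT INTO archive VALUES (?, ?)", (comment_id, body))

    def get(self, comment_id):
        if comment_id in self.fresh:           # fast path: still in memory
            return self.fresh[comment_id]
        row = self.db.execute(
            "SELECT body FROM archive WHERE id = ?", (comment_id,)).fetchone()
        return row[0] if row else None

store = CommentStore()
store.fresh[1] = "a fresh comment"
store.archive(1)                               # moved to the disk-backed tier
```

The trade-off is that lookups of archived comments pay a disk (or cache) access, which is why the benchmark results discussed below are interesting.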
Upcoming work: enhancing transaction signing
We still plan to overhaul the transaction signing system in hived. These changes will be included as part of the next hardfork, and they are also tightly related to support for “Lite Accounts” via a HAF API (my plan is to offer a similar signing feature set across the two systems to keep things simple), so the Lite Account API will probably be rolled out on a similar time frame.
HAF: framework for creating new Hive APIs and apps
- We now store the state history of a HAF node inside its database (e.g. when it switches between massive sync, live sync, etc): https://gitlab.syncad.com/hive/haf/-/merge_requests/595
- Documentation updates: https://gitlab.syncad.com/hive/haf/-/merge_requests/629 https://gitlab.syncad.com/hive/haf/-/merge_requests/630
Upcoming work: lightweight HAF servers
Plans to support an alternate “lightweight” version of HAF with pruned block data: https://gitlab.syncad.com/hive/haf/-/issues/277
HAfAH: account history API
We switched the new REST API to structured parameter/return value types. Only a few apps currently use the new API, and those apps (e.g. Denser and the HAF block explorer UI) have been upgraded to the newer version of the API.
Hivemind: social media API
There was a fix to hivemind’s healthcheck due to an update to HAF’s schema: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/867
Optimized notification cache processing to reduce block processing time
We optimized notification cache processing. Previously this code had to reprocess the last 90 days’ worth of blocks (around 3 million) to generate notifications for users, so it consumed the vast majority of the time hivemind needed to process each block during live sync (around 500ms on a fast system). Now we update the cache incrementally, block by block, and it is 100x faster (around 5ms). Total time for the hivemind indexer to process a block is now down to a comfortable 50ms. A few minor issues remain in the new code, which we’ll resolve in a later release.
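The shift from full-window rescans to incremental updates can be sketched like this (hypothetical Python; the real hivemind cache lives in Postgres, and the 5-block window below stands in for the production 90-day window):

```python
from collections import deque

WINDOW = 5  # notification window in blocks (90 days ~ millions of blocks in production)

class NotificationCache:
    """Incremental sliding-window cache: each new block adds its
    notifications and expires those that fell out of the window,
    instead of rescanning the entire window on every block."""
    def __init__(self):
        self.window = deque()        # (block_num, notification_count) pairs
        self.total = 0

    def on_block(self, block_num, notifications):
        self.window.append((block_num, notifications))
        self.total += notifications
        # expire blocks older than the window: O(1) amortized per block,
        # versus O(window size) for a full recomputation
        while self.window and self.window[0][0] <= block_num - WINDOW:
            _, expired = self.window.popleft()
            self.total -= expired

cache = NotificationCache()
for block in range(1, 8):
    cache.on_block(block, 1)   # one notification per block
# cache.total is now 5 (only blocks 3-7 remain inside the window)
```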
Redesigned follow-style tables to be more efficient
We also redesigned the tables and queries for managing follows, mutes, blacklists, etc. This not only reduced storage requirements but, more importantly, allows queries against these tables to scale well over time. In particular, functions that need to skip "muted" information should be much faster.
These changes were done in https://gitlab.syncad.com/hive/hivemind/-/merge_requests/863 https://gitlab.syncad.com/hive/hivemind/-/merge_requests/869 https://gitlab.syncad.com/hive/hivemind/-/merge_requests/872
Configurable timeout for long API calls
Hivemind also now has a configurable timeout on API calls that node operators can set to automatically kill pathological queries that might unnecessarily load their server. The default is 5s, which I think should be appropriate for most current servers; operators of very fast servers may consider lowering this value, and very slow servers may want to increase it.
Balance tracker API: tracks token balance histories for accounts
- Similar to HAFAH, the REST API was modified to support structured parameter and return types.
- We added daily, monthly, and yearly aggregation data for coin balances.
- Further speedups to sync time.
- Added limits for APIs that take a page size.
- Added support for delegation processing.
- Now tracks savings balances.
- New API for recurrent transfers.
Reputation tracker: API for fetching account reputation
- Fixed a problem with upgrading reputation_tracker on a haf server: https://gitlab.syncad.com/hive/reputation_tracker/-/merge_requests/96
- Rewrote reputation calculation algorithm to get a big speedup: https://gitlab.syncad.com/hive/reputation_tracker/-/merge_requests/89
HAF Block Explorer
- Rewrote the algorithm to speed it up, which also eliminated the need for periodic full vacuums of the tables.
- Rewrote permlink search: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/301
- Added an index for filtering by post in permlink search: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/303
- New block API: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/294
- Unify API return types to use structured types: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/298
- Unify types in hafbe endpoints: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/300
- Add max limits to various endpoints: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/287
- Add blocks index on hash column: https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/282
HAF API Node (Scripts for deploying and managing an API node)
For anyone who wants to run a Hive API node server, this is the place to start. This repo contains scripts for managing the required services using docker compose.
- Fixes to assisted startup script: https://gitlab.syncad.com/hive/haf_api_node/-/merge_requests/89
- Various fixes to healthchecks
- There is now a separate “hivemind_user” role used to allow timeouts on API-based queries (as opposed to indexing queries). As mentioned in the hivemind section, this timeout defaults to 5s.
- We now use the “haf” prefix by default instead of “haf-world” to shorten container names.
- Temp files under 200 bytes are no longer logged, to reduce log spam.
- More database tuning settings were applied based on analysis with pg_gather. In particular, work_mem was reduced from 1024MB to 64MB, which should reduce the chance of an OOM condition on a heavily loaded server.
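As a rough illustration, the tuning above corresponds to a postgresql.conf entry like the following (the values come from this report; appropriate settings depend on your server’s RAM and connection count):

```
# postgresql.conf
# work_mem is allocated per sort/hash operation and per connection, so a
# large value multiplies quickly on a busy server and can trigger the OOM killer.
work_mem = 64MB    # previously 1024MB
```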
WAX API library for Hive apps
- Support for external signature providers (like Keychain) in Wax’s transaction creation process.
- Eliminated dependency and code-based vulnerabilities reported by the dependabot service.
- Implemented support for MetaMask as a signature provider extension. We are waiting for security audit verification (of the dedicated MetaMask snap implementation supporting Hive integration) before Hive is officially supported by MetaMask. Hive has also been included in https://github.com/satoshilabs/slips/blob/master/slip-0044.md
- Improving the error information available to applications when API calls fail. The first step is mostly done in the hived repo: generating constants that represent specific FC_ASSERT instances. Next, exception classes in Wax will wrap the most common error cases and expose them to Python/TypeScript, simplifying error processing (currently, complex regexp parsing is required on the client side to detect some types of errors).
- Improvements to better support workerbee (our bot library).
- First working version of the Python implementation. API support is still in progress, but we expect to have our first prototype for automatically generating the Python API call definitions from the swagger.json file (as currently done for TypeScript) by next week.
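The planned exception wrapping might look roughly like this (a hypothetical Python sketch: the error classes and message patterns here are invented for illustration; the real constants will be generated from hived’s FC_ASSERT instances):

```python
import re

# Typed exceptions replace client-side regex parsing of raw error strings.
class HiveApiError(Exception): pass
class InsufficientManaError(HiveApiError): pass
class DuplicateTransactionError(HiveApiError): pass

# Hypothetical message patterns; in the real design these would be
# derived from constants generated in the hived repo.
_ERROR_MAP = [
    (re.compile(r"not enough mana"), InsufficientManaError),
    (re.compile(r"duplicate transaction"), DuplicateTransactionError),
]

def raise_for_error(raw_message):
    """Map a raw node error message to a typed exception."""
    for pattern, exc_class in _ERROR_MAP:
        if pattern.search(raw_message):
            raise exc_class(raw_message)
    raise HiveApiError(raw_message)   # fallback for unmapped errors
```

An application can then catch `InsufficientManaError` directly instead of pattern-matching error text itself.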
Hive Wallet MetaMask Snap
This is a Hive wallet extension that lets you sign transactions using keys derived from your MetaMask wallet.
We are currently preparing the library for an official security audit by improving project documentation and fixing issues after an internal review.
Hive Bridge
To make joining Hive smoother, we created a Hive Bridge service providing basic features such as signing and encrypting (useful in bot authentication flows where a given user needs to confirm their authority by encrypting a provided buffer). The service is available at: https://auth.openhive.network
WorkerBee
This is a TypeScript library for automated Hive-related tasks (e.g. for writing bots that process incoming blocks). We recently made performance optimizations to workerbee to support large-scale block processing scenarios where many previous blocks need to be fetched; recent work has sped this up by as much as 3x. We’re continuing to work on eliminating further bottlenecks.
Generic Healthchecker UI component
We officially released the healthchecker UI component. It can be installed into a Hive web app to let users monitor the performance of the API nodes available to them and control which node is used. The HAF block explorer UI has been updated to use this component.
Denser
- Support for communities
- Support for keychain signing
- Beta version available for testing at https://blog.openhive.network
HiveSense: Semantic-based post searching using vector embeddings
HiveSense is a brand-new HAF app that optionally allows semantic searching of hivemind data (i.e. Hive posts). This should solve a long-standing problem, namely that it has been difficult for users to find older content, so it should be a very nice improvement to the ecosystem.
It works by running deep learning models to generate vector embeddings for Hive posts. These can then be searched to identify posts that are semantically related (related by meaning rather than by exactly matching words) to a user’s search term.
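The core retrieval idea can be shown with a toy example (hypothetical Python with hand-made 3-dimensional vectors; HiveSense’s real embeddings come from deep learning models, have many more dimensions, and are searched with a proper vector index rather than a linear scan):

```python
import math

def cosine(a, b):
    """Cosine similarity: how aligned two embedding vectors are."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "post embeddings"; real ones are produced by an embedding model.
posts = {
    "gardening tips": [0.9, 0.1, 0.0],
    "crypto markets": [0.0, 0.2, 0.9],
}

def search(query_vec, k=1):
    """Return the k posts whose embeddings are most similar to the query."""
    ranked = sorted(posts, key=lambda p: cosine(query_vec, posts[p]), reverse=True)
    return ranked[:k]
```

A query embedding close in meaning to “plants and soil” would rank “gardening tips” first even if it shares no words with the post, which is exactly what keyword search cannot do.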
This project is still undergoing development and testing right now, but the code is available for public experimentation. The new repo is here: https://gitlab.syncad.com/hive/hivesense
Comments
Great set of updates!! Obvious big fan of Denser and Wax.
Some questions:
a) For Hive Bridge, I see that it is not detecting the MetaMask extension; instead it asks for MetaMask Flask in "Requesting Account Creation". Is there documentation we can look at for Hive Bridge, or is it part of the MetaMask snap?
b) Is the HealthChecker UI used in any site somewhere?
Checking the other projects. Well done to the core dev team.
I'll ask about Hive Bridge; I'm not involved in that project directly, so I don't know the details.
The healthchecker UI is embedded into the latest haf block explorer. You can see it by going to https://testexplore.openhive.network/ and clicking on the API node links at the bottom of the page to bring up the node page.
Regarding MetaMask Flask, I was told: "I think it's because we are still not official, and for that a third-party audit is required, which is scheduled soon."
Yes, our team is going through the verification process for the MetaMask snap mentioned above. Until that completes, we can't use the official MetaMask distribution channel.
We benchmarked the new stack on an AMD 9950X3D with two 4TB T700 NVMe drives and 64GB of RAM. HAF itself, using a RAM disk temporarily for the state file, replays in 13.34h. Next, replaying the apps in parallel takes 49h39m (the block explorer takes 7h38m, reputation tracker takes 11h20m, and hivemind takes 49h39m). So the total time to replay on this quite fast system is 13.34h + 49.65h ≈ 63h, i.e. 2 days 15 hours.
Been following Hive for a while now and this release feels like a huge leap forward. Thanks for keeping us updated with such detailed reports, makes techies like me feel part of the process!
Awesome to see these come to light!
That last one I know is going to make life easier for many users and myself who always think of a post but give up even trying to search for it.
Adding my 2c to hived's "Optimizations in progress".
The first one, pool allocators, in principle does not reduce memory usage (at least not at the current stage). In fact it increases it somewhat; however, due to less fragmentation, actual usable memory increases, hence less allocation is reported. It is the same effect we could already get by loading a snapshot, except now the benefit persists instead of degrading over time. I'm pretty curious about the performance increase, especially in combination with the just-finished undo session optimizations. As far as I know, pool allocators are not yet applied to objects created for undo sessions, so there is still room for improvement (it might even show up in the performance of regular sync, not just in extreme traffic in live sync and in the colony+queen cycle). Also, we have detailed data on the average number of objects needed at any given time, which might allow some tweaking of pool sizes for each individual index and might slightly improve overall performance further. Once the MR is included in develop, a further task opens up, but Marek wanted to first get an estimate of how much actual memory consumption reduction we can gain with it, especially considering that the biggest item is getting much smaller with the second mentioned optimization, the comment archive.
As for the "comment archive" optimization, a drastic reduction of memory usage is obviously expected, but the first benchmarks look too good to be true. Even the extreme pessimistic scenarios are faster than the current version, with most comment accesses going to the archive in RocksDB instead of fresh comments inside the in-memory multiindex (measured on big blocks with over 14k votes in each block). It is not entirely impossible, because RocksDB might be algorithmically faster even though each access to data is potentially much slower, but that would mean it makes sense to take a deep look at the type of multiindex we are using, and not just for comments - more opportunities for optimizations :o)
I don't really agree that there's not a real memory usage reduction. The memory is in fact allocated by the software, and hence isn't available to other programs, regardless of whether it is actively storing data or not. Further, I don't think it makes sense to compare it to a snapshot load, because that is at best a temporary effect, not reflective of how the software behaves over time.
Well, the underlying mechanism behind the lower effective memory allocation is the same: objects are placed better in memory. In the case of a snapshot load it just happens, because objects of the same class are allocated one after another, while the pool allocator guarantees such behavior. That's why I said "except now the benefit persists instead of degrading over time".
It is funny how the mind works. I wasn't consciously thinking about it, and yet I think I figured out why voting on old comments, when the data might need to be pulled from disk, might be faster than voting on fresh comments. Now it is kind of obvious, but since I expected reaching for archived comments to be two or three orders of magnitude slower than using the multiindex, the results showing it was faster clouded the most likely reason.
When voting on an archived comment, all hived needs to do is determine that the target comment exists, and then the work ends. On the other hand, voting on a fresh comment is actual voting: calculating the mana to consume and the strength of the vote, creating a related state object, and updating data in multiple other objects; then all that work is undone, redone during block production, undone again, and performed yet again during block application. In the case of archived comments it also needs to reach for the comment three times, but the second and third times it hits the cache. For a fair comparison we'd need to measure just the access to the comment, in isolation from everything else.
If the above turns out to be the correct explanation of the observed behavior, it would put me at ease, because it means there is no danger in using that optimization even on witness nodes, even with saturated traffic and big blocks. I might still try to implement an in-memory version of the archive, if only to test whether Mariusz designed the interfaces correctly :o)
Thank you for the update. It will help us greatly as we engage here
We are glad to learn about these developments. Thank you
This is a massive improvement and a good set of updates. Thank you for the effort put into achieving this. You deserve applause 👏💪
one of the only ones who is still here from way back... you are a saint sir
Hello I just wanted to ask why are you delegating to @buildawhale which is using the power to do wrong 🤔
hes not doing wrong
I didn't know I mentioned you 😂 ok are you the owner of @blocktrades @buildawhale or @themarkymark
All I'm going to say is look at the comments in this post https://hive.blog/hive-135178/@kgakakillerg/a-trip-to-the-tower-of-london-2024-or-a-walk-around-the-tower-of-london-2024-part-31
Also @bpcvoter3 has exposed everything so please stop 🛑
Buildawhale is a big farm and downvoter 🤔 so what good are they doing please explain
I've also heard alot about you 😂
I'm everyone bro
😂😂😂😂😂 drag who ever you want into this 😂😂😂😂
I'm not the one dragging everyone who will listen in.
Keep playing you won't win 😂😂😂😂😂🤣🤣🤣🤣
ok
Here's a tip I think you need it 😂👍🏾 @tipu curate
Upvoted 👌 (Mana: 36/56) Liquid rewards.
Thanks bro
I'm not your bro I'm just showing you I don't care you are nothing more than a sad keyboard warrior with no life 😂😂😂🤣🤣🤣🤣🤣
OK bro
😂😂😂😂🤣 call me bro as much as you like it doesn't change anything I would never call you bro again
You started downvoting me for nothing just because you were exposed by people I know 😂😂😂😂😂
Can you tell everyone why I'm on your racist blacklist 🤔 and how do I get off your racist blacklist 🤔
Why can't @hivewatchers do nothing about you 🤔
Why is blocktrades supporting buildawhale and usainvote 🤔 it's not rocket science 😂😂😂😂😂😂
I'm not going to keep playing your game 😂😂😂😂
Cool story bro
I'm sure I've heard that before 🤔 😂😂😂😂😂😂
Phew, was getting worried
What you need to understand you have been exposed by people who know what they are doing the information is being sent to everyone 😂😂😂😂 https://hive.blog/hive/@solominer/re-bpcvoter1-sv4dwi 🤔🤔🤔🤔
ok
You can't make this shit up 😂😂😂😂😂 https://hive.blog/hive-148441/@hivewatchers/svh3fz
ok
😂😂😂😂😂🤣🤣🤣🤣🤣 https://hive.blog/hive-148441/@hivewatchers/svftu9
Nothing to say now 😂😂😂😂😂
https://youtu.be/SNzt9m3aV5g
https://youtu.be/FKOdVerIx8o
A Wake-Up Call: Accountability and Redemption in the Hive Ecosystem
To those who believe their actions on the Hive blockchain go unnoticed, it’s time to confront the truth. The power you think you wield through downvotes, vote farming, and manipulation is an illusion. True strength lies in doing what’s right, not in exploiting systems for personal gain or silencing others out of fear.
1. The Illusion of Power Through Exploitation
If you’re downvoting critics, running coordinated farms, or hoarding influence to control rewards, ask yourself: What does this say about your character? Manipulating Hive’s reward system doesn’t make you powerful—it exposes vulnerabilities. It shows a reliance on unethical tactics to feel significant in a space that was meant to empower everyone equally.
But here’s the reality: these actions won’t bring lasting satisfaction. At night, when you’re alone with your thoughts, do you truly believe what you’re doing is right? Regret has a way of lingering, and no amount of HIVE tokens can silence the voice inside telling you there’s a better path.
2. The Ripple Effect of Your Actions
Your choices don’t just harm Hive—they affect the people around you. Your online friends may follow along now, but loyalty built on shaky foundations crumbles quickly. And what about your family? Do they understand how much energy you pour into behaviors that ultimately hurt others and yourself?
Every action has consequences, and the deeper you dig, the harder it becomes to climb out. You’re not just harming Hive—you’re pulling those close to you into a cycle of negativity and regret. Is it worth it?
3. Change Before It’s Too Late
The Bilpcoin team isn’t here to attack individuals—we’re here to spark change. We’ve seen the evidence, and we know the impact of manipulative practices. But we also believe in redemption. There’s still time to step back, reflect, and choose a different path.
Ask yourself:
Hive deserves better. So do you.
4. Strength Lies in Doing What’s Right
Real power comes from integrity, not manipulation. Instead of focusing on how many tokens you can farm or how many users you can silence, consider how you can add value to the ecosystem. Create meaningful content, support genuine contributors, and advocate for reforms that benefit everyone—not just a select few.
If you continue down your current path, remember this: you are being watched. Many stay silent, but they see everything. Blockchain data doesn’t lie, and accountability always comes knocking—whether today, tomorrow, or years from now.
5. A Final Plea: Fix What Needs Fixing
It’s never too late to change course. Stop the farming, stop the toxicity, and start contributing to Hive in ways that uplift rather than tear down. If you’re struggling with why you’ve chosen this path, take a moment to reflect. Seek help if needed—there’s no shame in admitting you’ve lost your way.
The Hive community is watching, waiting, and hoping for positive change. Will you rise to the occasion, or will you let regret define your journey? The choice is yours.
From the Bilpcoin Team: We’re committed to helping Hive thrive by exposing the truth and fostering transparency. Let’s work together to build a stronger, fairer ecosystem—one where everyone has a chance to succeed.
What will your next move be?
Addressing the Downvote Issue on Hive: It’s Time for Real Change
Marky Mark Stop the manipulation game Lady Zaza Bpc AI Music
https://youtu.be/HpyaT029DRA
Let’s cut through the noise and address one of the most pressing issues holding Hive back from reaching its full potential: abusive downvotes. The Bipcoin team has consistently exposed the systemic problems that stifle growth on this platform, yet little to no action has been taken. If Hive is to thrive, we must confront these challenges head-on instead of engaging in empty discussions that lead nowhere.
Hive cannot grow if nothing changes. How can new users and smaller creators flourish when they are constantly suppressed by those who exploit the system? Many so-called “whales” contribute nothing meaningful to the ecosystem—instead, they focus solely on farming rewards through manipulative tactics like mass self-voting, spam content, or coordinated downvoting campaigns. These actions discourage genuine participation and erode trust within the community.
Blockchain transactions don’t lie. We’ve seen firsthand the shady methods used to game the system—whether it’s creating alt accounts to amplify votes, orchestrating downvote brigades, or hoarding influence without giving back to the network. Why does this behavior continue unchecked? Talk is cheap. Actions speak louder than words, and until concrete steps are taken to address these abuses, Hive will struggle to move forward.
We have repeatedly presented evidence of these practices, not to divide but to demand accountability. The truth is out there for anyone willing to look at the data. Yet, despite our efforts, the cycle persists. Today, we’re asking again: Can we finally talk about the real issues plaguing Hive? And more importantly, can we take decisive action to protect the integrity of this platform?
Do not let bullies destroy what makes Hive special. Abuse of power through downvotes and manipulation harms everyone—creators, curators, and the community as a whole. It’s time to fight back against these toxic practices and create an environment where all voices can be heard and rewarded fairly.
The Bipcoin team remains committed to exposing the truth and advocating for meaningful change. But we can’t do it alone. Together, let’s push for solutions that ensure Hive becomes a place of opportunity, fairness, and growth—for everyone.
Enough talk. Let’s see action.
Every curation reward from @buildawhale’s bot votes:
https://youtu.be/rQ33vKTd220
https://youtu.be/fYp5VwCT6Zs
https://youtu.be/Jq1OSzxVTxE?list=PLbH29p-63eW8RYYWw4wUNG87fDpzVma-c
The blockchain doesn’t forget—your actions are permanently recorded. So, enjoy your scam farm snacks now, because karma always delivers the bill.
https://peakd.com/@buildawhale/comments
https://peakd.com/@buildawhale/activities
Staked HIVE More Also known as HP or Hive Power. Powers governance, voting and rewards. Increases at an APR of approximately: ~13.07%
An unstake (power down) is scheduled to happen in 5 days (~4,742.795 HIVE, remaining 12 weeks) 61,991.841 Tot: 2,404,544.432 Delegated HIVE Staked tokens delegated between users. +2,342,552.591 Details HP Delegations RC Delegations Delegated: 0 HP Search No outgoing delegations Received: 2,342,553 HP @blocktrades 2,342,494 HP Aug 16, 2020 @nwjordan 24 HP May 27, 2018
We’ve exposed the truth repeatedly with ironclad evidence:
@themarkymark’s $2.4M scam farm.
@buildawhale’s daily grift.
@jacobtothe’s downvote army.
Downvote Army: Silencing truth-tellers like Bilpcoin.
Reward Yourself: Collecting daily paychecks from @buildawhale.
https://hive.blog/hive-124838/@themarkymark/re-jacobtothe-st6rk4
https://hive.blog/hive-167922/@usainvote/re-buildawhale-s6knpb
https://peakd.com/hive-124838/@bpcvoter1/st7wu1
https://peakd.com/hive-126152/@bpcvoter1/jacobtothe-you-re-not-god-no-matter-how-much-you-try-to-act-like-it-you-can-say-what-you-want-but-the-truth-remains-undeniable
https://peakd.com/hive-163521/@bpcvoter3/jacobtothe-this-was-way-over-rewarded-and-we-need-to-call-it-out-you-made-significant-rewards-from-a-post-that-was-created-using
https://peakd.com/hive-126152/@bpcvoter3/jacobtothe-let-s-address-the-core-issue-here-facts-speak-louder-than-rhetoric-we-ve-presented-verifiable-evidence-that
https://peakd.com/hive-126152/@bpcvoter3/think-carefully-about-your-next-move-because-this-issue-is-bigger-than-any-one-of-us-the-downvote-abuse-scamming-and-farming-by
https://hive.blog/hive-180505/@jacobtothe/re-denmarkguy-st9f4w
https://hive.blog/hive-124838/@themarkymark/re-peaksnaps-starqg
https://hive.blog/hive-167922/@darknightlive/re-bpcvoter2-bpc-dogazz
https://hive.blog/hive-126152/@tobetada/my-first-shitstorm#@azircon/re-tobetada-stbswq
https://hive.blog/hive-124838/@acidyo/re-themarkymark-stbthn
https://peakd.com/hive-168088/@bpcvoter3/jacobtothe-it-seems-we-re-going-in-circles-here-so-let-s-clarify-things-once-and-for-all-downvotes-don-t-erase-the-truth-the
https://peakd.com/hive-126152/@bpcvoter3/uwelang-it-seems-you-re-dismissing-the-discussion-without-engaging-with-the-actual-content-that-s-unfortunate-because-the
https://peakd.com/hive-121566/@bpcvoter1/std7q6
https://peakd.com/@uwelang/re-uwelang-stc2c3
https://hive.blog/hive/@themarkymark/what-the-fuck-do-witnesses-do-dk1
https://hive.blog/hive-167922/@usainvote/re-buildawhale-s5tysw
https://hive.blog/hive-124838/@steevc/re-meno-stdsv6
https://hive.blog/life/@crimsonclad/re-oldsoulnewb-sh93a2 You Can't Downvote The Truth Mc Franko Bpc Ai Music
https://youtu.be/dt5NeCofqwM
https://youtu.be/r1Yo-4fwjik
https://hive.blog/hive-126152/@jacobtothe/re-galenkp-stekn4
https://hive.blog/hive-126152/@jacobtothe/re-galenkp-steiss
https://youtu.be/dt5NeCofqwM
https://youtu.be/YvF7Tm-w3kQ
https://youtu.be/hBRluPW2M8s
https://hive.blog/hive-124838/@acidyo/re-web-gnar-stfc6c
https://hive.blog/mallorca/@azircon/re-abh12345-stfbyk
https://youtu.be/mQdgnt_45lU?list=PLbH29p-63eW8RYYWw4wUNG87fDpzVma-c
https://hive.blog/hive-167922/@uwelang/hpud-march-back-over-110k-hp
https://youtu.be/5wEl6BaB2RM
https://hive.blog/burnpost/@buildawhale/1742569802434998164
https://hive.blog/burnpost/@buildawhale/1742656202460243008
https://hive.blog/hive-124838/@themarkymark/re-peaksnaps-sti6pr
https://youtu.be/8zfuGpoO5do
https://hive.blog/hive-124838/@themarkymark/re-peaksnaps-stkfjj
https://hive.blog/burnpost/@buildawhale/1742742602124717444
https://hive.blog/burnpost/@themarkymark/re-kgakakillerg-stm4vv
I forgot to say: I'm not your bro. Just remember, you started this, not me. All you need to do is stop downvoting me.
You are so sad that you are now downvoting my comments 😂😂😂🤣🤣🤣🤣
Wow! This is an incredibly comprehensive update. The dedication of the Blocktrades team to improving Hive is really impressive. Looking forward to seeing these changes propagate across the network. Thank you for the detailed report.
You have really done a great job; you should be given a separate crown for that. Believe me, this platform really needs people like you.
Great efforts and progressive development.
It's always good to hear of optimisations as that ought to help Hive scale up. Hive has been pretty stable for a long time now and that is vital. I know you do lots of testing, so I hope that continues.
Keep up the good work.
I wanted to understand this post in a way that was a bit easier for me and others so I asked Gemini for some help... figure this could help someone else so i'm pasting it here: (Obviously if there are any misunderstandings feel free to correct)
Okay, Hive users, let's break down this tech update from Blocktrades! Think of this as upgrading the engines and tools that make Hive work behind the scenes.
Here’s the lowdown on what Hive API stack v1.27.11 means for you:
The Big News: Hive Just Got Faster!
You can see the new stack in action on api.hive.blog (switching soon) and api.syncad.com.
What Else is Cooking? (Key Improvements)
Smoother Performance & Less Hiccups:
Better Ways to Find Content (Coming Soon!):
Wallet and Account Improvements:
Behind-the-Scenes Foundation Work:
In a Nutshell:
This update is all about speed, efficiency, and building for the future. You should start noticing faster apps and websites soon. Big improvements like smarter search and MetaMask integration are on the horizon. The Blocktrades team (and other Hive developers) are working hard under the hood to make your Hive experience better!
Generally correct, but the speedup in notification updates won't be directly visible to users: the speedup is in the time it takes to build the table data (it used to take around half a second per block, loading the API node server more than it should). It's just a speedup in updating the table, not in the rate at which data is fetched from the table, so read speed is unchanged.
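A back-of-the-envelope sketch of why this write-path speedup matters for server load even though readers see no difference. The ~0.5 s/block figure comes from the comment above; the post-optimization figure below is purely a hypothetical placeholder:

```python
# Hive produces one block every 3 seconds. Building the notification
# table data reportedly used to take ~0.5 s per block of server time.
BLOCK_INTERVAL_S = 3.0

def write_path_load(build_seconds_per_block: float) -> float:
    """Fraction of wall-clock time the server spends building table data."""
    return build_seconds_per_block / BLOCK_INTERVAL_S

old = write_path_load(0.5)   # before the optimization (~17% of real time)
new = write_path_load(0.05)  # hypothetical post-optimization figure

print(f"old write load: {old:.1%}, new write load: {new:.1%}")
# Read-path cost (fetching rows already in the table) is unchanged;
# only this background table-building load shrinks.
```

The point of the sketch: the saved time frees server capacity for serving API requests, which is a real benefit even if any individual notification query returns at the same speed.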
Thanks for the nuanced information on that.
It's also worth pointing out it's not a complete summary either :-) For example, it doesn't even mention the new health checker component or workerbee at all as far as I can see. It seems like a good summary for a non-dev, but missing a lot of important info for devs.
Yes, I did give it non-devs as the target audience in this case. But it's good to point out there is more for devs; they'll want to read the full post.
Such cool developments happening! Thanks :)!
We have well over a hundred active Hive witnesses, but only around 10 active API nodes.
Should we have more API nodes?
I think around 10 is enough for now. With the new stack, any single one of them can easily handle all of the current traffic load of Hive, so adding lots at the moment is just overkill in terms of resource usage. Right now, the best thing is to have them spread out across the globe so that Hiveans can get Hive data faster wherever they are located.
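Since node choice comes down to geography, one way a client can pick a nearby API node is to time a lightweight JSON-RPC call against each candidate and use the fastest. A rough sketch; the node list is illustrative and not exhaustive, though `condenser_api.get_dynamic_global_properties` is a standard Hive API method:

```python
import json
import time
import urllib.request

CANDIDATE_NODES = [  # illustrative subset of public Hive API nodes
    "https://api.hive.blog",
    "https://api.syncad.com",
]

def probe(node: str, timeout: float = 5.0) -> float:
    """Return round-trip time in seconds for a cheap JSON-RPC call."""
    payload = json.dumps({
        "jsonrpc": "2.0",
        "method": "condenser_api.get_dynamic_global_properties",
        "params": [],
        "id": 1,
    }).encode()
    req = urllib.request.Request(
        node, data=payload, headers={"Content-Type": "application/json"})
    start = time.monotonic()
    with urllib.request.urlopen(req, timeout=timeout):
        pass
    return time.monotonic() - start

def fastest(latencies: dict) -> str:
    """Pick the node with the lowest measured latency."""
    return min(latencies, key=latencies.get)

# Example (requires network):
#   latencies = {n: probe(n) for n in CANDIDATE_NODES}
#   print("closest node:", fastest(latencies))
```

Hive client libraries generally maintain their own node lists and handle failover automatically; this just shows the idea behind preferring a geographically close node.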
!PIZZA
Great development down here 👍😍
Sir your journey is so smooth for us
Congratulations on 1000 votes, sir. Please help me: why did Buildawhale 🐋 unvote me?
Congratulations @blocktrades
Hello, good day @blocktrades,
Is there a way to get help from this account/project?
Okay, from the look of things, it seems almost impossible to do that because it’s obvious this account is meant for a project. However, there’s no harm in trying, as I’ve seen this account being supportive of the growth of the Blockchain.
We are humbly seeking support for The Comedy Club Community. It’s a newly created community designed to fill in the missing piece of laughter on the Blockchain. The main aim is exactly that—with no external intentions.
To keep this request brief: please help us in any capacity, perhaps through HP delegation or by extending curation support for entries to our contests, under your terms and conditions.
Sorry, I know this is supposed to be a private chat, but I thought it would be better (as an attempt) to make this request here. Maybe we could continue on Discord if you're okay with that. Also, kindly forgive my poor manner of making this request.
Many thanks!🙌