Storm starter – Overview
The Storm starter examples are offered with real sincerity: they are not just demos, but code that can be used directly in real scenarios.
They also provide some very useful tools, such as SlidingWindowCounter and TimeCacheMap.
So storm-starter effectively provides a framework for programming on top of Storm, and is well worth studying carefully...
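To make the sliding-window-counter idea concrete, here is a minimal sketch of the slot-based approach the starter's SlidingWindowCounter is built on. The class and method names below are illustrative only, not the starter's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a slot-based sliding window counter.
// Each time slot holds its own counts; advancing the window
// wipes the oldest slot, so old increments fall out of the total.
public class SlidingCounterSketch {
    private final Map<String, long[]> counts = new HashMap<>();
    private final int numSlots;
    private int head = 0; // index of the current slot

    public SlidingCounterSketch(int numSlots) {
        this.numSlots = numSlots;
    }

    public void incrementCount(String obj) {
        counts.computeIfAbsent(obj, k -> new long[numSlots])[head]++;
    }

    // Total over all slots, i.e. the count within the window.
    public long getCount(String obj) {
        long[] slots = counts.get(obj);
        if (slots == null) return 0;
        long total = 0;
        for (long c : slots) total += c;
        return total;
    }

    // Advance the window: the slot about to be reused is cleared,
    // dropping counts older than numSlots advances.
    public void advanceWindow() {
        head = (head + 1) % numSlots;
        for (long[] slots : counts.values()) slots[head] = 0;
    }
}
```

In the real starter, advanceWindow is driven periodically (e.g. by a timer thread or tick tuples), so each slot corresponds to a fixed span of wall-clock time.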
ExclamationTopology, the basic topology
Nothing special here; this is the standard example.
/**
 * This is a basic example of a Storm topology.
 */
public class ExclamationTopology {
    public static class ExclamationBolt extends BaseRichBolt {
        OutputCollector _collector;

        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
            _collector = collector;
        }

        @Override
        public void execute(Tuple tuple) {
            _collector.emit(tuple, new Values(tuple.getString(0) + "!!!"));
            _collector.ack(tuple);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("word", new TestWordSpout(), 10);
        builder.setBolt("exclaim1", new ExclamationBolt(), 3)
               .shuffleGrouping("word");
        builder.setBolt("exclaim2", new ExclamationBolt(), 2)
               .shuffleGrouping("exclaim1");

        Config conf = new Config();
        conf.setDebug(true);

        if (args != null && args.length > 0) {
            conf.setNumWorkers(3);
            StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
        } else {
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("test", conf, builder.createTopology());
            Utils.sleep(10000);
            cluster.killTopology("test");
            cluster.shutdown();
        }
    }
}
RollingTopWords
Implements top-N and a sliding window.
The bolt implementations in this example are very instructive; see Storm starter – RollingTopWords.
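The final step of RollingTopWords is picking the N highest-counted words out of a count map. The real example spreads this over intermediate and total ranking bolts; the following is only a minimal sketch of the ranking step itself, not the starter's actual Rankings class:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the top-N step: keep only the N highest-count words.
public class TopNSketch {
    public static List<String> topN(Map<String, Integer> counts, int n) {
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(counts.entrySet());
        // Sort descending by count so the head of the list is the most frequent word.
        entries.sort((a, b) -> b.getValue() - a.getValue());
        List<String> result = new ArrayList<>();
        for (int i = 0; i < Math.min(n, entries.size()); i++) {
            result.add(entries.get(i).getKey());
        }
        return result;
    }
}
```

Splitting this into per-task intermediate rankings merged by a single total-ranking bolt is what lets the real topology parallelize counting while still producing one global top-N.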
SingleJoinExample
Implements an in-memory join using TimeCacheMap; see Storm starter – SingleJoinExample.
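The core of the join is simple: buffer a tuple from one side, keyed by the join field, until the matching tuple from the other side arrives, then emit the combined result. The sketch below shows that idea with two hypothetical streams (gender and age joined on id, as in the starter example); it omits the expiry that TimeCacheMap provides, which is what keeps unmatched entries from leaking memory:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a memory join: hold each side's unmatched tuples by key,
// join as soon as the other side shows up. No expiry (the real
// SingleJoinExample uses TimeCacheMap so stale entries time out).
public class JoinSketch {
    private final Map<Integer, String> pendingGender = new HashMap<>();
    private final Map<Integer, Integer> pendingAge = new HashMap<>();
    private final List<String> joined = new ArrayList<>();

    public void onGender(int id, String gender) {
        Integer age = pendingAge.remove(id);
        if (age != null) joined.add(id + "," + gender + "," + age);
        else pendingGender.put(id, gender);
    }

    public void onAge(int id, int age) {
        String gender = pendingGender.remove(id);
        if (gender != null) joined.add(id + "," + gender + "," + age);
        else pendingAge.put(id, age);
    }

    public List<String> results() {
        return joined;
    }
}
```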
BasicDRPCTopology, ReachTopology
Examples of DRPC; see Twitter Storm – DRPC.
TransactionalGlobalCount, TransactionalWords
Transactional topologies; see Storm – Transactional-topologies.
TransactionalGlobalCount is fairly simple, so let's look at TransactionalWords,
which, on top of counting words, also maintains distribution statistics over the word counts.
public static Map<String, CountValue> COUNT_DATABASE = new HashMap<String, CountValue>();
public static Map<Integer, BucketValue> BUCKET_DATABASE = new HashMap<Integer, BucketValue>();
COUNT_DATABASE records the count of each word.
BUCKET_DATABASE records the distribution of the word counts, e.g. how many words have appeared 0–9 times, how many 10–19 times, and so on.
public static class KeyedCountUpdater extends BaseTransactionalBolt implements ICommitter
KeyedCountUpdater is not much different from the earlier simple examples: it counts words in execute, and in finishBatch commits the results directly to COUNT_DATABASE.
It declares the output fields new Fields("id", "key", "count", "prev-count"). Most of this is easy to follow, but why prev-count? Because when updating BUCKET_DATABASE we need to know whether the word has moved to a different bucket, and for that we must know its previous count.
Bucketize computes which bucket a count belongs to via count / BUCKET_SIZE:
For a new word, emit +1 for its bucket.
If the word's bucket has changed, emit +1 for the new bucket and -1 for the old one.
If nothing has changed, emit nothing.
public static class Bucketize extends BaseBasicBolt {
    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        TransactionAttempt attempt = (TransactionAttempt) tuple.getValue(0);
        int curr = tuple.getInteger(2);
        Integer prev = tuple.getInteger(3);

        int currBucket = curr / BUCKET_SIZE;
        Integer prevBucket = null;
        if (prev != null) {
            prevBucket = prev / BUCKET_SIZE;
        }

        if (prevBucket == null) {
            collector.emit(new Values(attempt, currBucket, 1));
        } else if (currBucket != prevBucket) {
            collector.emit(new Values(attempt, currBucket, 1));
            collector.emit(new Values(attempt, prevBucket, -1));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("attempt", "bucket", "delta"));
    }
}
BucketCountUpdater then applies these bucket deltas to BUCKET_DATABASE.
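Applying the (bucket, delta) stream boils down to folding each delta into a histogram map. This is only a sketch of that fold (the real BucketCountUpdater also stores the transaction id alongside each value so that replayed batches are applied idempotently):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the bucket update: fold a (bucket, delta) pair emitted
// by Bucketize into the bucket histogram.
public class BucketUpdateSketch {
    public static void applyDelta(Map<Integer, Integer> bucketDb, int bucket, int delta) {
        int updated = bucketDb.getOrDefault(bucket, 0) + delta;
        if (updated == 0) {
            bucketDb.remove(bucket); // an empty bucket needs no entry
        } else {
            bucketDb.put(bucket, updated);
        }
    }
}
```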
The topology is defined as follows:
MemoryTransactionalSpout spout = new MemoryTransactionalSpout(DATA, new Fields("word"), PARTITION_TAKE_PER_BATCH);
TransactionalTopologyBuilder builder = new TransactionalTopologyBuilder("top-n-words", "spout", spout, 2);
builder.setBolt("count", new KeyedCountUpdater(), 5)
       .fieldsGrouping("spout", new Fields("word"));
builder.setBolt("bucketize", new Bucketize())
       .noneGrouping("count");
builder.setBolt("buckets", new BucketCountUpdater(), 5)
       .fieldsGrouping("bucketize", new Fields("bucket"));
WordCountTopology, multi-language support
ShellBolt and BaseBasicBolt are used to declare bolts implemented in Python and Java respectively.
public static class SplitSentence extends ShellBolt implements IRichBolt {
    public SplitSentence() {
        super("python", "splitsentence.py");
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}

public static class WordCount extends BaseBasicBolt {
    Map<String, Integer> counts = new HashMap<String, Integer>();

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String word = tuple.getString(0);
        Integer count = counts.get(word);
        if (count == null) count = 0;
        count++;
        counts.put(word, count);
        collector.emit(new Values(word, count));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }
}
When defining the topology, ShellBolt- and BaseBasicBolt-based bolts can be mixed directly, which is very convenient.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new RandomSentenceSpout(), 5);
builder.setBolt("split", new SplitSentence(), 8)
       .shuffleGrouping("spout");
builder.setBolt("count", new WordCount(), 12)
       .fieldsGrouping("split", new Fields("word"));
This article was excerpted from the Cnblogs blog; original publication date: 2013-05-24.