Tag Archives: distributed

DotNet: multiple AI computing agents work as a whole to provide services in a distributed and decentralized way


From my Telegram chat group:

I have an AI-based system for stock trading that uses options data as input. I would like to deploy it to SingularityNET. Can someone guide me on how to go about it?

You can check out the project's GitHub for a deeper look: https://github.com/singnet/singnet


SNet (Alpha 1.0) will be completed within the next month (Nov 2017).


NOTE: I now post my TRADING ALERTS to my personal FACEBOOK ACCOUNT and TWITTER. Don't worry, I don't post stupid cat videos or what I eat!

Lua stored procedure with Bloomberg distributed open source database


Who knew there was a Bloomberg distributed open source database?

https://github.com/bloomberg/cartodb

There are other neat projects from Bloomberg as well:

https://github.com/bloomberg

https://github.com/bloomberg/bqplot

https://github.com/bloomberg/bde

By the way, Redis is still better with its embedded Lua scripting.
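To make that concrete, here is a minimal sketch of a Redis "stored procedure" written in Lua: an atomic capped increment. The key name and cap value below are hypothetical examples, not anything from the post. Because Redis executes scripts on its single command thread, the read-check-write sequence cannot interleave with other clients.

```lua
-- Atomically increment a counter, but only if it stays at or under a cap.
-- KEYS[1] = counter key, ARGV[1] = cap (both supplied at call time).
local current = tonumber(redis.call('GET', KEYS[1]) or '0')
local cap = tonumber(ARGV[1])
if current + 1 > cap then
  return nil                          -- refuse: cap reached
end
return redis.call('INCR', KEYS[1])    -- atomic under Redis's execution model
```

You would run it with something like `redis-cli EVAL "$(cat capped_incr.lua)" 1 mykey 100`, where `1` tells Redis that one key name follows.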


Implement machine learning distributed deep learning network on Spark


Would this be fairly fast, since Spark is supposed to run in memory?

This is from super Facebook fan Nuno, so thanks to him.

http://www.datasciencecentral.com/profiles/blogs/implementing-a-distributed-deep-learning-network-over-spark

Join my FREE newsletter to see if this ever gets implemented in my automated trading



A Trading Machines: distributed fault tolerance for real-time trading


I am sure this is a smart approach, but I used to see FIX as the faster way. No: as Ernie Chan confirms, always choose the broker API over FIX, as there is no difference in speed. Ask Lime Brokers about that one. All in all, another interesting way to implement this, but there is always a better one. Enjoy your off-roading day now.

Thanks to the NYC contact for sending it.

Projects

Join my FREE newsletter to see how we do it


What is your preferred profiling/optimization/debugging tool(s) for large distributed/shared and/or hybrid applications?


==For debugging, we use the Dartboard Method: place the source code on the board, toss a dart, and start looking in the vicinity of the procedure the dart is stuck in.
==I'm not sure I'd call it preferred yet; having tried some monitoring tools, I'm finding I want to know what platform resources (disk, memory, cache, CPU) are being used alongside my app. I'm currently trialling Hyperic HQ and instrumenting Java apps via JMX. It looks promising.
Hyperic gives reasonable monitoring capabilities for tens of boxes; I'm not sure about hundreds or thousands.
So the basic approach:
* Instrument, to identify problems
* Monitor from one platform, to spot problem servers and services
* Drop back to standard debugging and profiling
Very interested to know others' experiences with tooling.
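The second answer's idea, seeing platform resource usage alongside the app, can be sketched in a few lines. This is not Hyperic or JMX, just a hedged illustration of self-instrumentation using the Python standard library on Unix: snapshot the process's CPU and memory counters around a hot section and diff them.

```python
import resource
import time

def sample_usage():
    """Snapshot CPU and memory usage of the current process (Unix only)."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "cpu_user_s": ru.ru_utime,   # CPU seconds spent in user mode
        "cpu_sys_s": ru.ru_stime,    # CPU seconds spent in kernel mode
        "max_rss": ru.ru_maxrss,     # peak resident set size
                                     # (KiB on Linux, bytes on macOS)
        "wall_clock": time.time(),   # lets you compute rates between samples
    }

# Take two samples around a hot section and diff them.
before = sample_usage()
total = sum(i * i for i in range(100_000))   # stand-in for real work
after = sample_usage()
cpu_delta = after["cpu_user_s"] - before["cpu_user_s"]
```

In a real deployment you would ship such samples to a central monitor rather than keep them in-process; that centralization is exactly what tools like Hyperic add on top.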


BigData: How data is distributed in Hadoop HDFS (Hadoop Distributed File System)

The Apache Hadoop framework uses the Google MapReduce model and Google File System logic. In Hadoop, data is split into chunks and distributed across all nodes in the cluster. This concept is inherited from the Google File System; in Hadoop it is called HDFS (Hadoop Distributed File System). While loading data into HDFS, it starts distributing to all nodes based on a few parameters. Here we will see two important parameters to consider for better performance.

1. Chunk size (dfs.block.size, in bytes) – 64 MB, 128 MB, 256 MB or 512 MB. It is preferable to choose the size based on the input data to be processed and the power of each node.

2. Replication factor (dfs.replication=3) – by default it is 3, meaning the data will be available on 3 nodes, i.e. 3 copies around the cluster. If there is a high chance of node failure, it is better to increase the replication factor. The need for replication is that if any node in the cluster fails, the data on that node cannot be processed, and we would not get a complete result.

For example, to process 1 TB of data with 1000 nodes: 1 TB (1024 GB) × 3 (replication factor) = 3072 GB of data will be stored across the 1000-node cluster. We can specify the chunk size based on each node's capability; if a node has more than 2 GB of memory (RAM), we can specify a 512 MB chunk size. One TaskTracker per node will process one chunk at a time; with a dual-core processor, one node will process 2 chunks at the same time. So specify the chunk size based on the memory available on each node. It is recommended not to use the NameNode (master) as a DataNode as well, or that single node will be overloaded with the work of both the TaskTracker and the JobTracker.
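The arithmetic in that example can be sketched as a couple of helper functions. The function names are my own, not Hadoop APIs; the defaults mirror the parameters the post describes.

```python
import math

def replicated_footprint_gb(data_gb, replication_factor=3):
    """Total storage consumed cluster-wide once HDFS replicates each chunk."""
    return data_gb * replication_factor

def chunk_count(data_mb, block_size_mb=128):
    """Number of HDFS blocks ("chunks") the input is split into."""
    return math.ceil(data_mb / block_size_mb)

# The post's example: 1 TB of input with the default replication factor of 3.
footprint = replicated_footprint_gb(1024, 3)       # 3072 GB stored cluster-wide
blocks = chunk_count(1024 * 1024, 128)             # 8192 blocks of 128 MB each
```

The same trade-off is visible in the numbers: raising the replication factor multiplies the storage footprint linearly, while raising the block size divides the number of map tasks the cluster must schedule.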

Will the data be distributed equally across the Hadoop cluster's nodes?

No, it is not distributed as, say, 3 GB on each node. Some nodes will have 8 GB of data, others 5 GB or 1 GB, and so on. But each node will hold complete chunks; data is not split into half chunks here and there.

In upcoming posts we will see more Hadoop parameters for improving cluster performance.
http://cloud-computation.blogspot.com/2011/07/bigdata-how-data-distributed-in-hadoop.html
