Reach out to us to discuss using our platform, collaborating or working together, or anything else
1212 Broadway Plaza, Ste #2100 Walnut Creek, CA 94596
We store data about materials in a central database, and every user has a copy that they can contribute to and modify at will. We support both private and public scenarios. By default, all data you create under a personal account is public and accessible to other users. You can choose to opt out and keep your data in the private domain.
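As a rough illustration of how the public/private choice might be exercised programmatically (the endpoint, token, and field names below are hypothetical placeholders, not our actual API), a sketch could look like this:

import requests

# Hypothetical REST endpoint and payload; real names and fields may differ.
API_BASE = "https://platform.example.com/api/v1"
API_TOKEN = "your-token-here"

def set_material_visibility(material_id, public=True):
    """Mark a material entry as public (the default) or private."""
    response = requests.patch(
        f"{API_BASE}/materials/{material_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"isPublic": public},  # opting out moves the entry to the private domain
    )
    response.raise_for_status()
    return response.json()

# Example: move one of your materials out of the public domain.
# set_material_visibility("material-id-here", public=False)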
Chances are that these days your data may be more secure in the cloud than on your company's own servers. One look at the number of case studies and companies that use cloud computing for their mission-critical activities serves as better proof than a thousand words: link. If your enterprise is not on the list yet, you may be missing out.
We strive to be provider-agnostic and let our users select the hardware details. Every provider has its strong and weak points, and we would like to leverage those appropriately to maximize the benefits for our users.
With the cloud, there is really no practical limit on high-throughput calculations. We have scaled to 35,000 cores at most in test runs, and regularly use ~10,000 cores in production. For a single communication-intensive job (e.g., large cells, hybrid functionals, GW), the low-latency interconnect options on some cloud providers allow scaling to ~16-32 nodes (see Fig. 2 from this recent manuscript). Alternatively, there are large-memory nodes, which we have found very helpful for GW, for example. We presently administratively limit each cluster to 200 nodes (each with 16-36 cores) and allow 10 nodes per single job.
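To make the limits above concrete, a job request could be checked against them roughly as follows (the constants and helper name are illustrative only, not our production interface):

# Administrative limits described above (illustrative constants).
MAX_NODES_PER_JOB = 10            # nodes allowed per single job
CORES_PER_NODE_RANGE = (16, 36)   # cores per node in a cluster

def validate_job_request(nodes_requested, cores_per_node):
    """Check a job request against the per-job and per-node limits."""
    if nodes_requested > MAX_NODES_PER_JOB:
        raise ValueError(
            f"Requested {nodes_requested} nodes; at most {MAX_NODES_PER_JOB} are allowed per job."
        )
    lo, hi = CORES_PER_NODE_RANGE
    if not lo <= cores_per_node <= hi:
        raise ValueError(f"Cores per node must be between {lo} and {hi}.")
    return nodes_requested * cores_per_node  # total cores the job would use

# Example: a 10-node job on 36-core nodes uses 360 cores.
print(validate_job_request(10, 36))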