
Stratoscale and hyper-converged operating system software

About Stratoscale

At Stratoscale, we’re focused on how technology can be leveraged to help IT teams make better and more profitable use of existing infrastructure. We know that data needs are growing at an ever-increasing pace, so we’ve built a hardware-agnostic, hyper-converged software solution that lowers the cost of scale-out and allows your IT infrastructure to keep up with business growth.

The Product

Stratoscale is hyper-converged operating system software that optimizes large data center operations. Stratoscale’s distributed software uses the rack as its design paradigm, in contrast to the traditional, single-server paradigm – creating a totally new foundation software stack. With system-learning algorithms that allow for increasingly smarter capacity planning and resource utilization, Stratoscale’s operating system software enables companies to maintain an infrastructure that maximizes efficiency and operational simplicity at scale.

Stratoscale provides solutions for a variety of use cases and business problems across industries.

Big Data

Meeting the Needs of Growing Analytical Demands Requires a New Software and Hardware Approach.

Today’s mobile and cloud era involves an ever-growing number of devices and connections to the Internet. This creates a new challenge for IT environments, which need to handle and process the masses of data being produced.

The challenge is to leverage the large amount of structured, semi-structured and unstructured data that is being generated, especially in connection with e-commerce, social media and the Internet of Things (IoT). The idea is that more data should lead to more accurate analyses, leading to better decisions and greater operational efficiencies.

However, most IT organizations are finding themselves faced with data sets that are too immense and complex to be processed using most relational database management systems and desktop statistics and visualization packages. As the amount of data continues to grow exponentially, organizations increasingly rely on solutions – such as Hadoop and Cassandra, which are built to handle immense data volumes – to present meaningful and actionable results.

These emerging software analytics platforms often share one commonality: They rely on distributed and scale-out architectures.

Unlike traditional data analytics solutions, these new frameworks perform parallel queries that run concurrently across tens, hundreds, or even thousands of servers. Successful implementations often hinge on mapping out the right strategy for deploying and managing the infrastructure necessary to support this new breed of analytics.
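
The parallel-query pattern described above can be sketched with a toy map/reduce job. This is only an illustration of the pattern, not Hadoop itself: the “nodes” here are local worker processes standing in for machines in a cluster, and the function names are hypothetical.

```python
# Toy map/reduce sketch: each "node" (a local process) counts words in
# its own slice of the data, then the partial results are merged.
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def map_count(chunk):
    """Map phase: one node counts words in its own slice of the data."""
    return Counter(chunk.split())

def reduce_counts(partials):
    """Reduce phase: merge the per-node partial results."""
    total = Counter()
    for partial in partials:
        total += partial
    return total

if __name__ == "__main__":
    data = ["big data big insight", "data lake data pipeline"]
    with ProcessPoolExecutor(max_workers=2) as pool:
        partials = pool.map(map_count, data)
    print(reduce_counts(partials))  # Counter({'data': 3, 'big': 2, ...})
```

Real frameworks add what this sketch omits: distributing the data slices to the machines that hold them, tolerating node failures, and shuffling intermediate results between the map and reduce phases.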

A fully virtualized infrastructure can provide the agility needed to provision additional compute instances dynamically while also simultaneously allowing non-analytics workloads to run side by side. This negates the requirement to purchase and manage application-specific hardware. In addition, policy-based configuration practices provide the delivery of workloads in a matter of minutes, providing a new level of control over resource placement.

With its innovative rack-scale architecture, Stratoscale provides the capabilities needed to confidently move ahead with any big data initiatives. By optimizing the deployment and management of virtualized Hadoop installations, Stratoscale allows organizations to get back to focusing on using big data insights to improve decision-making and increase productivity.

Hyper-Convergence

Time to Optimize Data Center Compute, Storage and Networking Resources.

Converged infrastructures typically combine siloed technologies (such as storage and compute) into a single platform, creating an opportunity to significantly reduce both CAPEX and OPEX. Maximizing this opportunity, however, should not come at the expense of workload performance or add management complexity.

Integrating compute, storage and networking resources can reduce IT costs, improve efficiency and create a more agile environment.

Basic converged systems bring storage and virtualization technologies together on a single hardware platform. Management applications are used to loosely bind the two environments together for management and provisioning convenience. Some costs are reduced by having virtualization and storage running on a single hardware platform (usually a dedicated appliance); however, there are still two disparate operating environments, and therefore the system is not truly converged or optimized.

True Hyper-Convergence

A hyper-converged infrastructure dramatically reduces these silos, presenting all data center components in a holistic manner. The platform acts as a single infrastructure that runs all workloads and applications. The servers, storage, networking and even the virtualization stack are not only bundled together, but completely integrated and transparent to the administrator.

In a truly hyper-converged environment, rack-wide pools (or pools as wide as the data center) of compute, storage and networking resources are created on a single platform. Virtualized and containerized workloads are fully orchestrated and harmonized so that the problem of resource contention or interference is bypassed. A workload requiring heavy I/O won’t impact adjacent workload performance.

Stratoscale’s software creates an environment where the intended benefits of hyper-convergence can be realized. By allowing integrated technologies to be managed as a single, holistic system, the Stratoscale solution creates a self-optimizing infrastructure which automatically distributes workloads to run on the best matching hardware across the cluster, while constantly measuring and rebalancing workloads as required. When workload requirements change and rebalancing is needed, sub-second migration occurs, moving the workload to other, less busy nodes.
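
The placement behavior described above can be sketched with a toy best-fit scheduler. This is a minimal illustration of the general idea, not Stratoscale’s actual algorithm; the node names and the simple CPU/RAM capacity model are assumptions made for the example.

```python
# Toy best-fit placement sketch (illustrative only; not Stratoscale's
# actual scheduler). Each workload goes to the node that satisfies its
# requirements while leaving the least spare CPU behind.

def place(workload, nodes):
    """Place a workload on the best-fit node; return its name, or None.

    workload: dict with 'cpu' and 'ram' requirements.
    nodes: list of dicts with 'name' and free 'cpu'/'ram' capacities.
    """
    candidates = [n for n in nodes
                  if n["cpu"] >= workload["cpu"] and n["ram"] >= workload["ram"]]
    if not candidates:
        return None  # a real system would rebalance, queue, or reject here
    # Best fit: the candidate left with the least spare CPU after placement.
    best = min(candidates, key=lambda n: n["cpu"] - workload["cpu"])
    best["cpu"] -= workload["cpu"]
    best["ram"] -= workload["ram"]
    return best["name"]

nodes = [
    {"name": "node-a", "cpu": 8, "ram": 32},
    {"name": "node-b", "cpu": 4, "ram": 16},
]
print(place({"cpu": 4, "ram": 8}, nodes))  # node-b (the tightest fit)
print(place({"cpu": 6, "ram": 8}, nodes))  # node-a (node-b is now full)
```

A production scheduler would also weigh I/O load, data locality and interference between neighbors, and would migrate workloads when these measurements drift, which is the rebalancing behavior the paragraph above describes.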

Stratoscale’s all-software solution is built around the BYOH principle. “Bring your own hardware” allows organizations to seamlessly integrate existing compute, storage and networking hardware systems, allowing for unprecedented operational simplicity, scalability and time to value.

DevOps vs. IT

Creating a “DevOps” Centric IT Culture and Infrastructure.

New application development paradigms create a significant challenge for IT: How does IT provide support for new infrastructure requirements without impacting legacy workloads?


The DevOps paradigm is designed to create an agile, highly responsive environment for application development, testing, deployment and operations. This “brave new world” moves traditional IT and application developers closer together, creating significant opportunity for organizations to focus on creating a competitive advantage.

In a DevOps environment, application developers focus on the code they write. A single application can be created and wrapped in a container such as Docker and rapidly deployed throughout the infrastructure, scaling to thousands of instances. Orchestration tools communicate with the infrastructure assigning compute, storage and networking resources as needed. The primary motivator for the developers is the performance and scalability of the application that they wrote.
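
The orchestration loop described here, keeping a desired number of application instances running, can be sketched as a toy reconcile step. This is an illustration only; real orchestrators such as Docker Swarm or Kubernetes do far more, and the function and instance names are hypothetical.

```python
# Toy "reconcile" step in the spirit of container orchestrators:
# compare the desired replica count with what is running and return the
# actions needed to converge. (Sketch only; real orchestrators also
# handle health checks, rolling updates, and resource assignment.)

def reconcile(desired, running):
    """Return a list of (action, instance) pairs to reach `desired` replicas."""
    if len(running) < desired:
        # Scale up: start new numbered instances.
        return [("start", f"app-{i}") for i in range(len(running), desired)]
    if len(running) > desired:
        # Scale down: stop the surplus instances.
        return [("stop", name) for name in running[desired:]]
    return []  # already converged

print(reconcile(3, ["app-0"]))           # start app-1 and app-2
print(reconcile(1, ["app-0", "app-1"]))  # stop app-1
```

Running a loop of observe, reconcile, act is what lets an orchestrator scale an application to thousands of instances without a human provisioning each one.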

This approach is very different from the traditional IT view. Traditionally, IT has been primarily concerned with the utilization of individual resources or silos. Server virtualization has been leveraged to improve server utilization by running multiple virtual environments on a single server platform. Separate storage solutions deliver the data needed; and separate network technologies are used to connect everything together. While this approach has been somewhat effective in the past, the world of DevOps requires a much more agile, elastic and hyper-converged approach to the infrastructure.

In a DevOps environment:

  • Infrastructure must instantly be able to handle any type of workload (virtualized or container-based).
  • Provisioning of infrastructure must be automatic and happen in 1-2 seconds, as compared to today’s manual processes involving days or even weeks.
  • Resource utilization must be monitored in real time, with balancing taking place in a sub-second manner.

Stratoscale has developed a rack-scale, hyper-converged software solution which delivers all of the requirements of a DevOps infrastructure environment. By supporting virtualized and containerized workloads, while converging compute, storage and networking resources, and orchestrating workload deployments utilizing sophisticated scheduling algorithms, we deliver a “run anything, store everything” environment that is ideally suited to DevOps.

The Cloud and OpenStack™

OpenStack is Paving the Way for Private and Public Cloud Standardization.

OpenStack is an open source software solution that provides an Infrastructure-as-a-Service (IaaS) platform for private and public cloud deployments. As cloud computing continues its rise in the world of IT, OpenStack has become, arguably, the leader and de-facto standard in the open source community. While still a relatively new technology, industry support for OpenStack has been impressive and is creating opportunities for new and existing vendors to market their software distributions, appliances, public clouds and even consulting services.

With support from hundreds of companies from around the globe, the community of open source developers has shepherded OpenStack to its current form. By leveraging other existing open source components, OpenStack’s core platform allows data centers to pool together large compute (Nova), storage (Swift and Cinder) and networking (Neutron) resources into a single framework. Additional services such as user and image management round out a suite of software services that enable data centers to be DevOps friendly and function as a self-service cloud-computing infrastructure.

An open source alternative to more traditional systems, OpenStack has piqued the curiosity of those tied to legacy and proprietary solutions. The promise of high levels of customization, which are sometimes necessary to more closely match business needs, is extremely appealing and avoids the dreaded vendor lock-in. In addition, the collaborative nature of open source projects means individual companies don’t have to carry the full burden and costs of development by themselves. Most important, however, is OpenStack’s potential for drastically cutting data center expenses – including licensing costs for virtualization and ongoing maintenance.

But perhaps the biggest benefit OpenStack has brought to the industry is the unofficial standardization of core cloud computing interfaces. By rallying support across software and hardware industries, OpenStack is now the de-facto API standard for private and public clouds (alongside AWS). This level of abstraction is vital to the health of the project’s ecosystem, allowing partners to provide value-added differentiation while guaranteeing interoperability with other vendors.

With hundreds of corporations, service providers and global data centers currently considering OpenStack solutions, the real question may be how to successfully leverage OpenStack in order to maximize the efficiency of the data center.

Stratoscale takes the guesswork out of deploying OpenStack clouds of all sizes.

Stratoscale is a hardware-agnostic software stack that is 100% compatible with OpenStack. By converging compute, storage and networking into resource pools available across the rack or data center, Stratoscale’s self-optimizing infrastructure automatically distributes all physical and virtual assets and workloads in real time, delivering rack-scale economics to data centers of all sizes with unparalleled efficiency and operational simplicity.

Virtual Machines vs. Containers

Two virtualization technologies headed for a crossroads in a fight for dominance in the data center.

Today, nearly all IT organizations have come to realize the value and cost savings afforded by virtualization technology. The premise is simple: Consolidate multiple applications running on individual (and oftentimes underutilized) servers onto a single server, thus reaping tremendous hardware savings and cutting other operational expenses.

The technology, while extremely complex, is now readily available from both commercial vendors and open source solutions like KVM and Xen. These hypervisors – the software that provides the virtualization functionality – are responsible for emulating the physical server hardware, namely the processor, memory, and networking. In addition, they enable the simultaneous operation of multiple operating systems (referred to as virtual machines) and their applications.

While cost savings often drive virtualization projects initially, enterprises and service providers alike now depend on virtualization for their public and private cloud infrastructure because of the flexibility and security it provides.

Recently, however, an emerging technology has been attracting tremendous interest as an alternative to traditional virtualization technology: Containers. While currently only available for Linux-based environments, containers resolve some of the problems typically associated with hypervisors and virtual machines. Because of their fundamentally different architectures, containers do not require a hypervisor and therefore provide better performance than applications running in virtual machines. This same architectural difference also results in faster provisioning of resources and quicker availability of new application instances. For organizations embracing a DevOps culture, this is a great fit, allowing development teams to streamline their develop-test-production processes.

But containers are not a silver bullet for all IT infrastructure needs. While they are a perfect fit for deploying homogeneous workloads (and similar types of workloads) like web applications at scale, container workloads on the same physical server share a single operating system and are therefore less appropriate for multi-tenant environments, because of potential security risks.

Do we really have to choose? Stratoscale allows you to run both containers and VMs on the same infrastructure.

Hypervisors and containers are great technologies that each have a place in the data center. The challenge is how to manage these two vastly different architectures within a single infrastructure, instead of as individual silos within the data center.

Stratoscale has developed a radically new approach that efficiently scales both virtualized and container-based workloads across a single, scale-out infrastructure, allowing enterprises and service providers to compete more efficiently through predictable performance and better economics.

The Founders

Ariel Maislos (CEO) brings over twenty years of technology innovation and entrepreneurship to Stratoscale. After a ten-year career with the IDF, where he was responsible for managing a section of the Technology R&D Department, Ariel founded Passave, now the world leader in FTTH technology. Passave was established in 2001, and acquired in 2006 by PMC-Sierra (PMCS), where Ariel served as VP of Strategy. In 2006 Ariel founded Pudding Media, an early pioneer in speech recognition technology, and Anobit, the leading provider of SSD technology, acquired by Apple (AAPL) in 2012. At Apple, he served as a Senior Director in charge of Flash Storage, until he left the company to found Stratoscale. Ariel is a graduate of the prestigious IDF training program Talpiot, and holds a BSc from the Hebrew University of Jerusalem in Physics, Mathematics and Computer Science (Cum Laude) and an MBA from Tel Aviv University. He holds numerous patents in networking, signal processing, storage and flash memory technologies.

The Team

Etay Bogner (CTO) brings over twenty years of technology innovation and entrepreneurship to Stratoscale. After eight years of working for several technology R&D startups, in 1999 Etay founded SofaWare, a Network Security company building firewall, VPN and networking appliances. SofaWare was acquired by Check Point (CHKP) in 2011. In 2006, Etay founded Neocleus, building the first client virtualization product, and pioneering device pass-through technologies. Neocleus was acquired by Intel (INTC) in 2010. Etay served as a strategist for Intel, commercializing client virtualization, before leaving the company to found Stratoscale. Etay holds a BSc from Tel-Aviv University in Computer Science and Mathematics.

Stratoscale’s added value is its founding team, which includes some of the most sought-after talent in the industry – a group that brings to the table prior experience at companies including IBM, Oracle, SAP, Cisco, Google, Apple, VMware and Red Hat. The company currently has the backing of first-class investors such as Battery Ventures, Bessemer Venture Partners, Intel Capital, Cisco and SanDisk.


Source of the article: BSD Magazine, Issue 66


1 response on "Stratoscale and hyper-converged operating system software"

  1. “Containers. While currently only available for Linux-based environments, containers resolve some of the problems typically associated with hypervisors and virtual machines.”

    Say what? While the latter half of that sentence is entirely correct, the former is about as wrong as can be!

    FreeBSD has had Jails since forever. FreeBSD jails provided the foundations of what became Solaris “zones” more than 10 years ago in SunOS 5.10 (Solaris 10 for those not in the know). After the introduction of resource constraints, “zones” with resource constraints became known as…wait for it…”containers”. So Solaris engineering has been perfecting containers for more than 10 years now.

    Linux containers (in the practical sense) are not as mature, stable or secure as a Solaris container. Now, before anyone gets any ideas, I’ve got NO love for Oracle! That said, there is enough misinformation and disinformation in the world already, so I felt the need to correct the oversights on the origins of containers.

    Perhaps they meant “Docker is only available on Linux”, well that’s not correct either. Docker has been/is being ported to Solaris 11 and FreeBSD. Docker will also be appearing in the next major release of Windows Server.



© HAKIN9 MEDIA SP. Z O.O. SP. K. 2013