GET STARTED WITH FASTSCORE QUICKLY
ENSURE YOUR MODELS RUN IN PRODUCTION USING SCHEMAS
CREATE, LOAD, MANAGE, AND EDIT DATA STREAMS
CREATE A LIBRARY OF ASSETS TO STREAMLINE DEPLOYMENT
FASTSCORE INTEGRATES WITH ORCHESTRATION TOOL KUBERNETES
INSTALL DOCKER AND FASTSCORE IN 6 MINUTES!
RUN ANY MODEL ANYTIME REGARDLESS OF ITS NATIVE DATA SCIENCE LANGUAGE
MAXIMIZE FLEXIBILITY TO DESIGN MODELS AND ENSURE THEY ARE READY FOR DEPLOYMENT
SIMULTANEOUS ANALYTIC ITERATION AND DEPLOYMENT WITH FASTSCORE
How do you validate that the right information is being processed into your model?
Ensuring that all model information is accurate and robust is key to getting the output you want. But how do you make sure that the model and its data (input and output streams) pass validation checks?
By using schemas, you can ensure that the right data flows into and out of your model in production, and plan for model change and maturity by separating data locations from data transport methods. A modular setup gives your deployment flexibility: you can pinpoint issues, confirm that your model works, and make changes quickly in FastScore. In this video, you will learn how to:
- Create, upload, and delete schemas through the dashboard and CLI
- Format new schemas and update existing ones
- Utilize schema features in Model Manage
- Enter schema information into models with smart comments
- Write input and output schemas directly into FastScore using AVRO format, or upload a file
- Reference the schemas in the model and streams for specific runs
These best practices will ensure that your new and mature models are being run correctly every time.
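To make the ideas above concrete, here is a small sketch: an AVRO-style record schema and a minimal conformance check. The field names and the checker below are hypothetical; they illustrate what schema validation does conceptually, not how FastScore implements it.

```python
import json

# A hypothetical AVRO schema for a model's input records.
input_schema = json.loads("""
{
  "type": "record",
  "name": "Input",
  "fields": [
    {"name": "x", "type": "double"},
    {"name": "y", "type": "double"}
  ]
}
""")

# Minimal conformance check for flat record schemas (illustrative only).
AVRO_PRIMITIVES = {"double": float, "int": int, "string": str, "boolean": bool}

def conforms(record, schema):
    """Return True if every schema field is present with the right type."""
    for field in schema["fields"]:
        expected = AVRO_PRIMITIVES[field["type"]]
        if not isinstance(record.get(field["name"]), expected):
            return False
    return True

print(conforms({"x": 1.0, "y": 2.5}, input_schema))  # -> True
print(conforms({"x": 1.0}, input_schema))            # -> False (missing "y")
```

A check like this catches a malformed record at the boundary of the model, which is exactly the failure mode schemas guard against in production.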
Learn how to incorporate different input and output streams into the FastScore engine, and how streams work when deploying your model to production.
A stream contains all the information about feeding data into and out of your model. The stream reads messages from an underlying transport, which could be a RESTful call, HTTP, Kafka, a file, or however else you choose to transport your data. In this video, you will learn how to manage streaming capabilities within FastScore.
- Understand how streams play into your model deployment process
- Upload, use, and interchange streams within your model in an engine
- Validate your streams against a schema
- Interchange stream transport types, for example from JSON to Kafka
- Explore the different ways to view and add streams through the CLI and dashboard
- Use streaming and batched data options within FastScore
These capabilities power your ability to deploy your model easily. Want to learn more about streaming? Reference the full documentation here.
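Conceptually, a stream descriptor bundles three things: the transport, the encoding, and a schema reference. The dictionary below is an illustrative sketch of that idea, not FastScore's exact descriptor format; the key names and the `to_kafka` helper are assumptions for demonstration.

```python
# Illustrative stream descriptor: transport + encoding + schema reference.
# (Hypothetical shape -- not FastScore's exact descriptor format.)
file_stream = {
    "Transport": {"Type": "file", "Path": "data/input.jsons"},
    "Encoding": "json",
    "Schema": {"$ref": "input_schema"},
}

def to_kafka(descriptor, bootstrap, topic):
    """Swap a descriptor's transport to Kafka, leaving encoding and schema alone."""
    swapped = dict(descriptor)
    swapped["Transport"] = {
        "Type": "kafka",
        "BootstrapServers": [bootstrap],
        "Topic": topic,
    }
    return swapped

kafka_stream = to_kafka(file_stream, "kafka-host:9092", "model-input")
print(kafka_stream["Transport"]["Type"])  # -> kafka
print(kafka_stream["Schema"])             # schema reference is unchanged
```

Because the transport is isolated in its own field, interchanging it (file to Kafka, REST to HTTP) never touches the model or its schemas, which is what makes the modular setup possible.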
Listen to George Kharchenko talk about model management capabilities in FastScore. Model management is built into the FastScore dashboard to help your team collaborate, centralize model repositories, and store all models in one place. Your models are unique to your business; FastScore allows you to create a library of assets so you can pick and choose which ones you want to deploy.
- Upload any model into Model Manage
- Access and modify model information in the FastScore CLI
- Preview model code and make modifications
- Match schemas to models
- Add input and output streams
Stay organized and run any model the way you want from the FastScore dashboard or CLI.
Matthew Mahowald, Lead Data Scientist, speaks about how FastScore integrates with orchestration services like Kubernetes, Mesosphere DC/OS, and Cloud Foundry. In this video, Matthew shows how the FastScore microservices dashboard runs through Kubernetes, walking through the Kubernetes dashboard step by step to integrate FastScore.
- A node is a virtual machine or physical computer that serves as a worker machine in a Kubernetes cluster
- Each node runs a “kubelet,” which is responsible for handling container operations on that machine
- Pods (collections of one or more Docker containers) live on nodes and are controlled by the kubelets
- Services are the “front end” to pods, providing the layer that external users or other applications use to interact with them
- FastScore’s components (Dashboard, Engine, Model Manage…) can each be structured as a service
Using orchestration tools allows you to scale FastScore up and run complex features right from the Kubernetes dashboard.
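The concepts above can be sketched in a minimal manifest: a Deployment schedules pods (containers) onto nodes, and a Service fronts those pods. The names, image, and port below are hypothetical placeholders, not FastScore's published deployment files.

```yaml
# Hypothetical sketch: a Deployment schedules pods onto nodes,
# and a Service exposes them. Names, image, and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastscore-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastscore-dashboard
  template:
    metadata:
      labels:
        app: fastscore-dashboard
    spec:
      containers:
        - name: dashboard
          image: example/fastscore-dashboard:latest   # placeholder image
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: fastscore-dashboard
spec:
  selector:
    app: fastscore-dashboard          # routes traffic to matching pods
  ports:
    - port: 8000
      targetPort: 8000
```

Structuring each FastScore component this way is what lets Kubernetes scale them independently.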
Want to install FastScore but not sure how to get it up and running? Watch this six-minute instructional video with our Product Manager, Rehgan Avon, as she walks us through how to install both Docker and FastScore on a clean system. The video covers the prerequisites you will need, how to configure the FastScore fleet, and more.
- Install Python and setup tools
- Install Docker and the FastScore CLI
- Launch Model Manage and install the FastScore fleet
Docker containers allow for easy installation and setup of FastScore. Once installed, you can view the dashboard and start scoring models in minutes.
Did you know FastScore, our agnostic analytic deployment engine, can run any model, any time, regardless of its native data science language? Watch this four-minute video to see a gradient boosting machine model built in Python, and the same model built in R, deployed to an AWS instance in three easy steps. With the right abstractions, and by leveraging microservices, you can deploy a model simply by:
- Loading models in any language into the scoring engine
- Selecting an input stream that delivers data into the model
- Selecting an output stream for where the data goes after scoring has been completed
Supported through both the FastScore dashboard and the command line, you can load and start scoring models in minutes.
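The three steps above (load a model, attach an input stream, attach an output stream) can be sketched as a small pipeline. Everything below, including the `model`, `input_stream`, and `run` names, is an illustrative abstraction, not FastScore's engine API.

```python
# Illustrative sketch of the three-step flow: model + input stream -> output stream.
# None of these names are FastScore APIs; they only mirror the concepts.

def model(record):
    """A stand-in model: score is a simple function of two features."""
    return {"score": 2 * record["x"] + record["y"]}

def input_stream():
    """Stand-in input transport: yields records (could be a file, REST, Kafka...)."""
    yield {"x": 1, "y": 2}
    yield {"x": 3, "y": 1}

def run(model, stream):
    """The 'engine': pull each record from the input stream, score it, emit it."""
    return [model(record) for record in stream]

results = run(model, input_stream())
print(results)  # -> [{'score': 4}, {'score': 7}]
```

Because the model never knows where records come from or go to, swapping the Python model for an R one, or the file stream for Kafka, leaves the other two pieces untouched.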
We are excited to make placing models into FastScore simple. Watch a step-by-step demo of our new Jupyter integration with Matthew Mahowald, Product Manager/Data Scientist.
A simple RESTful API with Jupyter allows you to verify how models behave before they are uploaded into FastScore engines. Prepare and upload models, validate data schemas, identify potential production failures and errors, and leverage your full data science stack, including libraries like Pandas and data.table, to validate, score, and gain feedback. Watch and get answers to our most frequently asked questions, and more.
- What languages does the Jupyter platform support for FastScore?
- Can I check and validate my models before uploading them to FastScore engines?
- How can I ensure my model deploys before I hand it to the production team?
Jupyter integration gives the data science team maximum flexibility to design models in familiar environments while simultaneously ensuring they are ready for deployment. With Jupyter and FastScore you can test locally and deploy globally.
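The local test-then-deploy loop might look like the sketch below: define the model function, then exercise it in the notebook on sample records before any upload. The smart-comment names and the `action` convention shown here should be treated as assumptions about FastScore's Python model style, and the model itself is a toy.

```python
# Sketch of a Python model tested locally in a notebook before deployment.
# The smart comments and the action() convention are assumptions for
# illustration, not a verified FastScore specification.

# fastscore.input: input_schema
# fastscore.output: output_schema

def action(datum):
    """Score one record: flag it when the reading exceeds a threshold."""
    yield {"id": datum["id"], "alert": datum["reading"] > 100.0}

# Local validation: run sample records through the model in the notebook
# and inspect the outputs before handing anything to the production team.
samples = [{"id": 1, "reading": 120.0}, {"id": 2, "reading": 80.0}]
outputs = [out for datum in samples for out in action(datum)]
print(outputs)  # -> [{'id': 1, 'alert': True}, {'id': 2, 'alert': False}]
```

Catching a schema mismatch or a runtime error at this stage, on your laptop, is far cheaper than discovering it in a production engine.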
In our first post in a series of video blogs, listen in as George from our engineering staff takes Brooke from our customer team through a demo of FastScore and creates an Analytic Operation Center. In the demo, you will see two gradient boosting machine models deployed and scored in real time. Both model instances are deployed in FastScore; then the two models' inputs and outputs are combined in a dashboard using Grafana, where you can monitor the analytic scoring as well as key performance metrics of the deployment. Watch as they discuss several interesting concepts, including:
- How can you quickly change models in production from Python to R?
- What happens to the compute resources when I change model languages?
- How can I leverage more analytic engines to increase scoring rates?
- Are there differences in running models in Azure vs AWS?
Centralized deployment, iteration, and monitoring of analytics enables an Analytic Operation Center for the business: a single place to understand, manage, and extract value from the data science investment.