SigOpt for Hedge Funds

OPTIMIZE YOUR TRADING MODELS

Hello, my name is Scott Clark, co-founder and CEO of SigOpt.

In this video I’m going to show you how SigOpt can help you amplify your trading models by optimally tuning them using our black-box optimization platform.


OPTIMIZATION AS A SERVICE

SigOpt optimizes...

● Trading Strategies
● Complex Models
● Simulations / Backtests
● Machine Learning and AI

Resulting in...

● Better Results
● Faster Development
● Cheaper, Faster Tuning

The SigOpt platform provides an ensemble of state-of-the-art Bayesian and Global optimization algorithms via a simple Software-as-a-Service API.

SigOpt optimizes trading and machine learning models with less trial and error by efficiently configuring the tunable parameters of these models.

This captures performance that conventional techniques may otherwise leave on the table, while also reducing the time and cost of developing and optimizing new models.



Every complex system has tunable parameters.

A car has parameters like the gear ratio or fuel injection ratio that affect output like top speed.

Machine learning models have tunable hyperparameters that affect output like accuracy or error metrics.

Algorithmic trading models have parameters like thresholds, weights, and data transformations that affect output like a backtested performance metric.
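
As a concrete, hypothetical illustration, the sketch below defines a toy moving-average crossover backtest whose window sizes and entry threshold stand in for the weights, thresholds, and transformations a real strategy would expose; the strategy, data, and Sharpe calculation are purely illustrative placeholders, not a recommended model.

```python
import numpy as np

def backtest_sharpe(prices, short_window, long_window, entry_threshold):
    """Toy moving-average crossover backtest returning an annualized Sharpe ratio.

    short_window, long_window, and entry_threshold are the tunable parameters;
    prices is a 1-D array of daily closes. Purely illustrative.
    """
    short_ma = np.convolve(prices, np.ones(short_window) / short_window, mode="valid")
    long_ma = np.convolve(prices, np.ones(long_window) / long_window, mode="valid")
    n = min(len(short_ma), len(long_ma))
    # Hold a long position when the short MA exceeds the long MA by the threshold.
    position = (short_ma[-n:] > long_ma[-n:] * (1.0 + entry_threshold)).astype(float)
    daily_returns = np.diff(prices[-n:]) / prices[-n:-1]
    strategy_returns = position[:-1] * daily_returns
    if strategy_returns.std() == 0:
        return 0.0
    return float(np.sqrt(252) * strategy_returns.mean() / strategy_returns.std())
```

A call like backtest_sharpe(prices, short_window=10, long_window=100, entry_threshold=0.01) returns the kind of backtested performance metric a tuner would try to maximize.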


STANDARD TUNING METHODS

[Diagram: Trading models, built with domain expertise, are evaluated on data in a backtest / simulation; their parameter configuration (weights, thresholds, window sizes, transformations) is chosen by grid search, random search, or manual search.]

Domain expertise is incredibly important when developing algorithmic trading models, which often need to undergo rigorous backtesting before being deployed into production.

Often the expert needs to tune the model they developed to optimize a performance metric and maximize returns.

This involves finding the best parameter configuration for all the various knobs and levers within the model and can have a significant impact on the end results.

Traditionally this is a very expensive, trial-and-error process that relies on methods like grid, random, local, or expert-intensive manual search.
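
To make those baselines concrete, here is a minimal sketch of grid search and random search over the toy strategy, reusing the hypothetical backtest_sharpe function from the earlier sketch; the parameter ranges are illustrative, not recommendations.

```python
import itertools
import random

# Illustrative search space over the toy strategy's tunable parameters.
GRID = {
    "short_window": [5, 10, 20],
    "long_window": [50, 100, 200],
    "entry_threshold": [0.0, 0.01, 0.02],
}

def grid_search(prices):
    """Exhaustively evaluate every combination in GRID (27 backtests here)."""
    best_params, best_sharpe = None, float("-inf")
    for values in itertools.product(*GRID.values()):
        params = dict(zip(GRID.keys(), values))
        sharpe = backtest_sharpe(prices, **params)
        if sharpe > best_sharpe:
            best_params, best_sharpe = params, sharpe
    return best_params, best_sharpe

def random_search(prices, n_trials=30, seed=0):
    """Sample configurations uniformly at random from the same ranges."""
    rng = random.Random(seed)
    best_params, best_sharpe = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "short_window": rng.randint(5, 20),
            "long_window": rng.randint(50, 200),
            "entry_threshold": rng.uniform(0.0, 0.02),
        }
        sharpe = backtest_sharpe(prices, **params)
        if sharpe > best_sharpe:
            best_params, best_sharpe = params, sharpe
    return best_params, best_sharpe
```

Note that the cost of grid search grows exponentially with the number of parameters, and every added evaluation here is a full backtest.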


OPTIMIZATION FEEDBACK LOOP

[Diagram: Trading models, built with domain expertise, are evaluated on data in a backtest / simulation to produce an objective metric; the metric is reported through the REST API, which returns new parameter configurations, yielding better results.]

SigOpt uses a proven, peer-reviewed ensemble of Bayesian and Global Optimization algorithms to efficiently tune these models.

First, SigOpt suggests parameter configurations to evaluate, which are then evaluated on some backtest or simulation where an objective metric like a Sharpe Ratio is calculated. This process is repeated, either in parallel or serially.

SigOpt’s ensemble of optimization methods leverages the historical performance of previous configurations to optimally suggest new parameter configurations to evaluate.

By efficiently trading off exploration (learning more information about the underlying parameters and response surface) and exploitation (leveraging that information to optimize the output metric), SigOpt is able to find better configurations exponentially faster than standard methods like an exhaustive grid search.

All of this is accomplished by bolting our easy-to-integrate REST API onto your existing models.

SigOpt’s black-box optimization algorithms require only high-level information about the parameters being tuned and how they performed, meaning sensitive information about your data and model stays private and secure.
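
Putting those steps together, here is a minimal sketch of the suggest / evaluate / report loop using SigOpt's Python client; it reuses the toy backtest_sharpe function and a prices array from the earlier sketch, and the API token, experiment name, parameter bounds, and evaluation budget are all placeholders.

```python
from sigopt import Connection  # SigOpt's Python client library

conn = Connection(client_token="YOUR_SIGOPT_API_TOKEN")  # placeholder token

# Describe only the tunable parameters; the data and strategy stay on your side.
experiment = conn.experiments().create(
    name="Moving average crossover tuning",
    parameters=[
        dict(name="short_window", type="int", bounds=dict(min=5, max=20)),
        dict(name="long_window", type="int", bounds=dict(min=50, max=200)),
        dict(name="entry_threshold", type="double", bounds=dict(min=0.0, max=0.02)),
    ],
)

for _ in range(60):  # evaluation budget; iterations can also run in parallel
    # 1. SigOpt suggests a parameter configuration to evaluate.
    suggestion = conn.experiments(experiment.id).suggestions().create()
    a = suggestion.assignments
    # 2. Evaluate it locally on the backtest to get the objective metric.
    sharpe = backtest_sharpe(
        prices, a["short_window"], a["long_window"], a["entry_threshold"])
    # 3. Report the result; only parameter values and the metric cross the API.
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id, value=sharpe)
```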


COMPARATIVE PERFORMANCE

● Better: 200% higher model returns than manual search

● Faster/Cheaper: 10x fewer evaluations vs. standard methods

[Chart: Backtested portfolio value over time (2004-2012), comparing configurations found by SigOpt, grid search, and manual search; see the blog post linked below.]

This results in dramatically improved performance over standard methods as seen in this joint work with Quantopian.

In this case, SigOpt was able to tune a simple trading strategy and beat both the standard method of exhaustive grid search and an expert performing a time-consuming and expensive manual search for a local optimum.

SigOpt found a model that produced 200% higher backtested returns than the expertly tuned configuration, while also requiring 10x fewer evaluations than the standard grid search approach.

Blog Post: https://blog.quantopian.com/bayesian-optimization-of-a-technical-trading-algorithm-with-ziplinesigopt-2/


USE CASE: ML MODELS

[Diagram: Training and testing data feed an ML / AI model evaluated via cross validation; the resulting accuracy is reported through the REST API, which returns new hyperparameter configurations, yielding better results.]

Because SigOpt is a black-box optimization platform it is agnostic to the underlying model being tuned and can be readily used to tune any Machine Learning or AI pipeline as well.

All SigOpt requires is continuous, integer, or categorical parameters to tune, like the hyperparameters of a machine learning pipeline, as well as a performance metric to optimize, like cross validated accuracy.

SigOpt makes no assumptions about the underlying parameters or metric being optimized. It can even be a composite of many underlying metrics and does not need to be convex, continuous, differentiable, or even defined for all configurations.
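
As a sketch of what this looks like for an ML pipeline, the snippet below defines continuous, integer, and categorical hyperparameters for a gradient boosted classifier and uses 3-fold cross-validated accuracy as the metric; the model, dataset, and bounds are illustrative, and the same suggest / evaluate / report loop from the trading example applies unchanged.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)  # stand-in dataset

# Illustrative parameter definitions covering the three supported types.
ml_parameters = [
    dict(name="learning_rate", type="double", bounds=dict(min=0.01, max=0.5)),
    dict(name="n_estimators", type="int", bounds=dict(min=50, max=300)),
    dict(name="max_features", type="categorical",
         categorical_values=[dict(name="sqrt"), dict(name="log2")]),
]

def cv_accuracy(assignments):
    """Objective: build the model from a suggested configuration and return
    cross-validated accuracy, the metric the optimizer is asked to maximize."""
    model = GradientBoostingClassifier(
        learning_rate=assignments["learning_rate"],
        n_estimators=assignments["n_estimators"],
        max_features=assignments["max_features"],
    )
    return cross_val_score(model, X, y, cv=3).mean()
```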


TUNABLE PARAMETERS IN DEEP LEARNING

Every machine learning or AI model has tunable hyperparameters that affect performance.

This can be as simple as the number of trees in a random forest or the kernel of a Support Vector Machine, or as complex as the learning rate in a gradient boosted or deep learning method.

In this simple TensorFlow example, we have constructed a 4-layer network to perform 2D binary classification. We are attempting to learn a surface that can differentiate the blue and orange dots seen in the figure to the right.

Even this simple task and small network has 22 tunable hyperparameters, including traditional hyperparameters like the learning rate and activation function, as well as regularization, architecture, and feature transformation parameters.

By tuning the parameters of this pipeline in unison we can achieve much better results than tuning them independently.

This extends to complex trading pipelines as well, which may incorporate many unsupervised and supervised learning techniques with tunable parameters in addition to the actual trading strategy itself.
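
As an illustration of how such hyperparameters might be exposed, here is a hedged Keras sketch of a small fully connected binary classifier whose architecture, regularization, and optimizer settings are all driven by a dictionary of suggested values; the parameter names and the network itself are hypothetical, not the exact pipeline from the example above.

```python
from tensorflow import keras

def build_classifier(hp):
    """Build a small binary classifier from a dict of suggested hyperparameters.

    Expected (hypothetical) keys: num_hidden_layers, layer_<i>_units,
    activation, l2_penalty, learning_rate.
    """
    model = keras.Sequential()
    for i in range(hp["num_hidden_layers"]):                    # architecture
        model.add(keras.layers.Dense(
            hp[f"layer_{i}_units"],                             # width per layer
            activation=hp["activation"],                        # e.g. "relu", "tanh"
            kernel_regularizer=keras.regularizers.l2(hp["l2_penalty"])))
    model.add(keras.layers.Dense(1, activation="sigmoid"))      # binary output
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=hp["learning_rate"]),
        loss="binary_crossentropy",
        metrics=["accuracy"])
    return model
```

Validation accuracy from training a model built this way would then be reported back as the objective metric, exactly as in the earlier loop.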


COMPARATIVE PERFORMANCE

● Better Results, Faster and Cheaper
Quickly get the most out of your models with our proven, peer-reviewed ensemble of Bayesian and Global Optimization Methods
○ A Stratified Analysis of Bayesian Optimization Methods (ICML 2016)
○ Evaluation System for a Bayesian Optimization Service (ICML 2016)
○ Interactive Preference Learning of Utility Functions for Multi-Objective Optimization (NIPS 2016)
○ And more...

● Fully Featured
Tune any model in any pipeline
○ Scales to 100 continuous, integer, and categorical parameters and many thousands of evaluations
○ Parallel tuning support across any number of models
○ Simple integrations with many languages and libraries
○ Powerful dashboards for introspecting your models and optimization
○ Advanced features like multi-objective optimization, failure region support, and more

● Secure Black Box Optimization
Your data and models never leave your system

SigOpt provides best-in-class performance. We’ve successfully deployed our solution at firms worldwide and rigorously compare our methods to standard and open source alternatives at the top machine learning conferences.

Our platform scales to any problem and provides features like native parallelism, multi-objective optimization, and more.

Additionally, our black box optimization approach means that your proprietary data and models never leave your system, allowing you to leverage these powerful techniques on top of the infrastructure and tools you’ve already built.

Links:
○ A Stratified Analysis of Bayesian Optimization Methods (ICML 2016)
■ https://arxiv.org/pdf/1603.09441v1.pdf
○ Evaluation System for a Bayesian Optimization Service (ICML 2016)
■ https://arxiv.org/abs/1605.06170
○ Interactive Preference Learning of Utility Functions for Multi-Objective Optimization (NIPS 2016)
■ https://arxiv.org/abs/1612.04453
○ And more…
■ https://sigopt.com/research


SIMPLIFIED OPTIMIZATION

Client Libraries
● Python
● Java
● R
● Matlab
● And more...

Framework Integrations
● TensorFlow
● Scikit-learn
● xgboost
● Keras
● Neon
● And more...

Live Demo

The SigOpt optimization platform integrates with any technology stack and the intuitive dashboards shine a light on the otherwise opaque world of parameter tuning.

Just plug our API in, tune your models, and your whole team benefits from the history, transparency, and analysis in the platform.

Documentation: https://sigopt.com/docs
Integrations: https://github.com/sigopt
Live Demo: https://sigopt.com/getstarted


DISTRIBUTED TRAINING

● SigOpt serves as a distributed scheduler for training models across workers

● Workers access the SigOpt API for the latest parameters to try for each model

● Enables easy distributed training of non-distributed algorithms across any number of models

SigOpt also allows you to tune any algorithm in parallel by acting as a distributed scheduler for parameter tuning.

This allows you to tune traditionally serial models in parallel, and achieve better results faster than otherwise possible, while also scaling across any number of independent models.
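
Here is a sketch of that worker pattern, assuming the sigopt Python client, a placeholder API token, an experiment created beforehand as in the earlier example, and a hypothetical evaluate_model function standing in for your backtest or training run.

```python
import multiprocessing

from sigopt import Connection

def worker(experiment_id, num_evaluations):
    """Each worker independently asks the API for the next configuration,
    evaluates it locally, and reports the result back."""
    conn = Connection(client_token="YOUR_SIGOPT_API_TOKEN")  # placeholder token
    for _ in range(num_evaluations):
        suggestion = conn.experiments(experiment_id).suggestions().create()
        value = evaluate_model(suggestion.assignments)  # your backtest or training run
        conn.experiments(experiment_id).observations().create(
            suggestion=suggestion.id, value=value)

if __name__ == "__main__":
    # Four local processes here, but the same pattern works across machines.
    jobs = [multiprocessing.Process(target=worker, args=("EXPERIMENT_ID", 15))
            for _ in range(4)]
    for job in jobs:
        job.start()
    for job in jobs:
        job.join()
```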


SIGOPT CUSTOMERS

SigOpt has successfully engaged with globally recognized leaders in the insurance, credit card, algorithmic trading, and consumer packaged goods industries. Use cases include:

● Trading Strategies
● Complex Models
● Simulations / Backtests
● Machine Learning and AI

Select Customers

SigOpt has been deployed successfully at some of the largest and most sophisticated firms and universities in the world.

We’ve helped tune everything from algorithmic trading strategies to machine learning and AI pipelines and beyond.


Contact us to set up an evaluation today

[email protected]

Contact us to set up an evaluation and unleash the power of Bayesian and Global Optimization on your models today.

Email: [email protected]

© 2017 SigOpt, Inc sigopt.com