Merge pull request #11558 from viotemp1/optuna

Switch hyperopt from scikit-optimize to Optuna
This commit is contained in:
Matthias
2025-05-09 09:53:13 +02:00
committed by GitHub
15 changed files with 356 additions and 251 deletions


@@ -161,56 +161,53 @@ class MyAwesomeStrategy(IStrategy):
 ### Overriding Base estimator
 
-You can define your own estimator for Hyperopt by implementing `generate_estimator()` in the Hyperopt subclass.
+You can define your own optuna sampler for Hyperopt by implementing `generate_estimator()` in the Hyperopt subclass.
 
 ```python
 class MyAwesomeStrategy(IStrategy):
     class HyperOpt:
         def generate_estimator(dimensions: List['Dimension'], **kwargs):
-            return "RF"
+            return "NSGAIIISampler"
 ```
 
-Possible values are either one of "GP", "RF", "ET", "GBRT" (details can be found in the [scikit-optimize documentation](https://scikit-optimize.github.io/)), or an instance of a class that inherits from `RegressorMixin` (from sklearn) and whose `predict` method has an optional `return_std` argument, returning `std(Y | x)` along with `E[Y | x]`.
-Some research will be necessary to find additional Regressors.
+Possible values are either one of "NSGAIISampler", "TPESampler", "GPSampler", "CmaEsSampler", "NSGAIIISampler", "QMCSampler" (details can be found in the [optuna samplers documentation](https://optuna.readthedocs.io/en/stable/reference/samplers/index.html)), or an instance of a class that inherits from `optuna.samplers.BaseSampler`.
+Some research will be necessary to find additional samplers (from optunahub, for example).
 
-Example for `ExtraTreesRegressor` ("ET") with additional parameters:
-
-```python
-class MyAwesomeStrategy(IStrategy):
-    class HyperOpt:
-        def generate_estimator(dimensions: List['Dimension'], **kwargs):
-            from skopt.learning import ExtraTreesRegressor
-            # Corresponds to "ET" - but allows additional parameters.
-            return ExtraTreesRegressor(n_estimators=100)
-```
-
-The `dimensions` parameter is the list of `skopt.space.Dimension` objects corresponding to the parameters to be optimized. It can be used to create isotropic kernels for the `skopt.learning.GaussianProcessRegressor` estimator. Here's an example:
-
-```python
-class MyAwesomeStrategy(IStrategy):
-    class HyperOpt:
-        def generate_estimator(dimensions: List['Dimension'], **kwargs):
-            from skopt.utils import cook_estimator
-            from skopt.learning.gaussian_process.kernels import (Matern, ConstantKernel)
-            kernel_bounds = (0.0001, 10000)
-            kernel = (
-                ConstantKernel(1.0, kernel_bounds) *
-                Matern(length_scale=np.ones(len(dimensions)), length_scale_bounds=[kernel_bounds for d in dimensions], nu=2.5)
-            )
-            kernel += (
-                ConstantKernel(1.0, kernel_bounds) *
-                Matern(length_scale=np.ones(len(dimensions)), length_scale_bounds=[kernel_bounds for d in dimensions], nu=1.5)
-            )
-            return cook_estimator("GP", space=dimensions, kernel=kernel, n_restarts_optimizer=2)
-```
-
 !!! Note
     While custom estimators can be provided, it's up to you as User to do research on possible parameters and analyze / understand which ones should be used.
-    If you're unsure about this, best use one of the Defaults (`"ET"` has proven to be the most versatile) without further parameters.
+    If you're unsure about this, best use one of the Defaults (`"NSGAIIISampler"` has proven to be the most versatile) without further parameters.
+
+??? Example "Using `AutoSampler` from Optunahub"
+    [AutoSampler docs](https://hub.optuna.org/samplers/auto_sampler/)
+
+    Install the necessary dependencies
+
+    ``` bash
+    pip install optunahub cmaes torch scipy
+    ```
+
+    Implement `generate_estimator()` in your strategy
+
+    ``` python
+    # ...
+    from freqtrade.strategy.interface import IStrategy
+    from typing import List
+    import optunahub
+    # ...
+
+    class my_strategy(IStrategy):
+        class HyperOpt:
+            def generate_estimator(dimensions: List["Dimension"], **kwargs):
+                if "random_state" in kwargs.keys():
+                    return optunahub.load_module("samplers/auto_sampler").AutoSampler(seed=kwargs["random_state"])
+                else:
+                    return optunahub.load_module("samplers/auto_sampler").AutoSampler()
+    ```
+
+    The same approach will work for all other samplers optuna supports.
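A configured sampler instance can also carry extra parameters, filling the role the removed `ExtraTreesRegressor` example used to play. A minimal sketch using optuna's built-in `TPESampler` (the `n_startup_trials` and `multivariate` settings are illustrative, not recommendations):

```python
class MyAwesomeStrategy(IStrategy):
    class HyperOpt:
        def generate_estimator(dimensions: List["Dimension"], **kwargs):
            import optuna

            # Corresponds to "TPESampler" - but allows additional parameters.
            # Seeding from random_state keeps --random-state runs reproducible.
            return optuna.samplers.TPESampler(
                seed=kwargs.get("random_state"),
                n_startup_trials=30,
                multivariate=True,
            )
```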
 ## Space options


@@ -219,10 +219,7 @@ On Windows, the `--logfile` option is also supported by Freqtrade and you can us
 First of all, most indicator libraries don't have GPU support - as such, there would be little benefit for indicator calculations.
 The GPU improvements would only apply to pandas-native calculations - or ones written by yourself.
 
-For hyperopt, freqtrade is using scikit-optimize, which is built on top of scikit-learn.
-Their statement about GPU support is [pretty clear](https://scikit-learn.org/stable/faq.html#will-you-add-gpu-support).
-
-GPU's also are only good at crunching numbers (floating point operations).
+GPU's are only good at crunching numbers (floating point operations).
 For hyperopt, we need both number-crunching (find next parameters) and running python code (running backtesting).
 As such, GPU's are not too well suited for most parts of hyperopt.


@@ -1,10 +1,10 @@
 # Hyperopt
 
 This page explains how to tune your strategy by finding the optimal
-parameters, a process called hyperparameter optimization. The bot uses algorithms included in the `scikit-optimize` package to accomplish this.
+parameters, a process called hyperparameter optimization. The bot uses algorithms included in the `optuna` package to accomplish this.
 The search will burn all your CPU cores, make your laptop sound like a fighter jet and still take a long time.
 
-In general, the search for best parameters starts with a few random combinations (see [below](#reproducible-results) for more details) and then uses Bayesian search with a ML regressor algorithm (currently ExtraTreesRegressor) to quickly find a combination of parameters in the search hyperspace that minimizes the value of the [loss function](#loss-functions).
+In general, the search for best parameters starts with a few random combinations (see [below](#reproducible-results) for more details) and then uses one of optuna's sampler algorithms (currently NSGAIIISampler) to quickly find a combination of parameters in the search hyperspace that minimizes the value of the [loss function](#loss-functions).
 
 Hyperopt requires historic data to be available, just as backtesting does (hyperopt runs backtesting many times with different parameters).
 To learn how to get data for the pairs and exchange you're interested in, head over to the [Data Downloading](data-download.md) section of the documentation.
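Conceptually, each epoch is one ask/tell cycle against an optuna study: the sampler proposes a parameter set, the bot backtests it, and the resulting loss is reported back. A minimal standalone sketch of that loop (the search space and toy objective are illustrative, not freqtrade code):

```python
import optuna
from optuna.distributions import FloatDistribution, IntDistribution

# Hypothetical search space, mirroring how hyperopt maps parameter
# names to optuna distributions.
dimensions = {
    "buy_rsi": IntDistribution(10, 40),
    "stoploss": FloatDistribution(-0.35, -0.02),
}

def objective(params: dict) -> float:
    # Stand-in for "run a backtest and compute the loss function".
    return abs(params["buy_rsi"] - 30) + params["stoploss"] ** 2

study = optuna.create_study(
    sampler=optuna.samplers.NSGAIIISampler(seed=42), direction="minimize"
)
for _ in range(30):
    trial = study.ask(dimensions)               # sampler proposes parameters
    study.tell(trial, objective(trial.params))  # report the loss back
print(study.best_params)
```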


@@ -4,6 +4,7 @@
 This module contains the hyperopt logic
 """
 
+import gc
 import logging
 import random
 from datetime import datetime
@@ -13,7 +14,7 @@ from pathlib import Path
 from typing import Any
 
 import rapidjson
-from joblib import Parallel, cpu_count, delayed, wrap_non_picklable_objects
+from joblib import Parallel, cpu_count
 
 from freqtrade.constants import FTHYPT_FILEVERSION, LAST_BT_RESULT_FN, Config
 from freqtrade.enums import HyperoptState
@@ -35,9 +36,6 @@ logger = logging.getLogger(__name__)
 
 INITIAL_POINTS = 30
 
-# Keep no more than SKOPT_MODEL_QUEUE_SIZE models
-# in the skopt model queue, to optimize memory consumption
-SKOPT_MODEL_QUEUE_SIZE = 10
 
 log_queue: Any
@@ -92,7 +90,7 @@ class Hyperopt:
         self.hyperopt_table_header = 0
         self.print_json = self.config.get("print_json", False)
 
-        self.hyperopter = HyperOptimizer(self.config)
+        self.hyperopter = HyperOptimizer(self.config, self.data_pickle_file)
 
     @staticmethod
     def get_lock_filename(config: Config) -> str:
@@ -158,14 +156,20 @@
                 log_queue, logging.INFO if self.config["verbosity"] < 1 else logging.DEBUG
             )
-            return self.hyperopter.generate_optimizer(*args, **kwargs)
+            return self.hyperopter.generate_optimizer_wrapped(*args, **kwargs)
 
-        return parallel(delayed(wrap_non_picklable_objects(optimizer_wrapper))(v) for v in asked)
+        return parallel(optimizer_wrapper(v) for v in asked)
 
     def _set_random_state(self, random_state: int | None) -> int:
         return random_state or random.randint(1, 2**16 - 1)  # noqa: S311
 
-    def get_asked_points(self, n_points: int) -> tuple[list[list[Any]], list[bool]]:
+    def get_optuna_asked_points(self, n_points: int, dimensions: dict) -> list[Any]:
+        asked: list[list[Any]] = []
+        for i in range(n_points):
+            asked.append(self.opt.ask(dimensions))
+        return asked
+
+    def get_asked_points(self, n_points: int, dimensions: dict) -> tuple[list[Any], list[bool]]:
         """
         Enforce points returned from `self.opt.ask` have not been already evaluated
@@ -191,19 +195,19 @@
         while i < 5 and len(asked_non_tried) < n_points:
             if i < 3:
                 self.opt.cache_ = {}
-                asked = unique_list(self.opt.ask(n_points=n_points * 5 if i > 0 else n_points))
+                asked = unique_list(
+                    self.get_optuna_asked_points(
+                        n_points=n_points * 5 if i > 0 else n_points, dimensions=dimensions
+                    )
+                )
                 is_random = [False for _ in range(len(asked))]
             else:
                 asked = unique_list(self.opt.space.rvs(n_samples=n_points * 5))
                 is_random = [True for _ in range(len(asked))]
             is_random_non_tried += [
-                rand
-                for x, rand in zip(asked, is_random, strict=False)
-                if x not in self.opt.Xi and x not in asked_non_tried
+                rand for x, rand in zip(asked, is_random, strict=False) if x not in asked_non_tried
             ]
-            asked_non_tried += [
-                x for x in asked if x not in self.opt.Xi and x not in asked_non_tried
-            ]
+            asked_non_tried += [x for x in asked if x not in asked_non_tried]
             i += 1
 
         if asked_non_tried:
@@ -212,7 +216,9 @@
                 is_random_non_tried[: min(len(asked_non_tried), n_points)],
             )
         else:
-            return self.opt.ask(n_points=n_points), [False for _ in range(n_points)]
+            return self.get_optuna_asked_points(n_points=n_points, dimensions=dimensions), [
+                False for _ in range(n_points)
+            ]
 
     def evaluate_result(self, val: dict[str, Any], current: int, is_random: bool):
         """
@@ -258,9 +264,7 @@
         config_jobs = self.config.get("hyperopt_jobs", -1)
         logger.info(f"Number of parallel jobs set as: {config_jobs}")
 
-        self.opt = self.hyperopter.get_optimizer(
-            config_jobs, self.random_state, INITIAL_POINTS, SKOPT_MODEL_QUEUE_SIZE
-        )
+        self.opt = self.hyperopter.get_optimizer(self.random_state)
         self._setup_logging_mp_workaround()
         try:
             with Parallel(n_jobs=config_jobs) as parallel:
@@ -276,9 +280,11 @@
                 if self.analyze_per_epoch:
                     # First analysis not in parallel mode when using --analyze-per-epoch.
                     # This allows dataprovider to load it's informative cache.
-                    asked, is_random = self.get_asked_points(n_points=1)
-                    f_val0 = self.hyperopter.generate_optimizer(asked[0])
-                    self.opt.tell(asked, [f_val0["loss"]])
+                    asked, is_random = self.get_asked_points(
+                        n_points=1, dimensions=self.hyperopter.o_dimensions
+                    )
+                    f_val0 = self.hyperopter.generate_optimizer(asked[0].params)
+                    self.opt.tell(asked[0], [f_val0["loss"]])
                     self.evaluate_result(f_val0, 1, is_random[0])
                     pbar.update(task, advance=1)
                     start += 1
@@ -290,9 +296,17 @@
                     n_rest = (i + 1) * jobs - (self.total_epochs - start)
                     current_jobs = jobs - n_rest if n_rest > 0 else jobs
 
-                    asked, is_random = self.get_asked_points(n_points=current_jobs)
-                    f_val = self.run_optimizer_parallel(parallel, asked)
-                    self.opt.tell(asked, [v["loss"] for v in f_val])
+                    asked, is_random = self.get_asked_points(
+                        n_points=current_jobs, dimensions=self.hyperopter.o_dimensions
+                    )
+                    f_val = self.run_optimizer_parallel(
+                        parallel,
+                        [asked1.params for asked1 in asked],
+                    )
+                    f_val_loss = [v["loss"] for v in f_val]
+                    for o_ask, v in zip(asked, f_val_loss, strict=False):
+                        self.opt.tell(o_ask, v)
 
                     for j, val in enumerate(f_val):
                         # Use human-friendly indexes here (starting from 1)
@@ -301,6 +315,7 @@
                         self.evaluate_result(val, current, is_random[j])
                         pbar.update(task, advance=1)
                     logging_mp_handle(log_queue)
+                    gc.collect()
         except KeyboardInterrupt:
             print("User interrupted..")


@@ -12,7 +12,7 @@ from freqtrade.exceptions import OperationalException
 with suppress(ImportError):
-    from skopt.space import Dimension
+    from freqtrade.optimize.space import Dimension
 
     from freqtrade.optimize.hyperopt.hyperopt_interface import EstimatorType, IHyperOpt


@@ -8,19 +8,18 @@ import math
 from abc import ABC
 from typing import TypeAlias
 
-from sklearn.base import RegressorMixin
-from skopt.space import Categorical, Dimension, Integer
+from optuna.samplers import BaseSampler
 
 from freqtrade.constants import Config
 from freqtrade.exchange import timeframe_to_minutes
 from freqtrade.misc import round_dict
-from freqtrade.optimize.space import SKDecimal
+from freqtrade.optimize.space import Categorical, Dimension, Integer, SKDecimal
 from freqtrade.strategy import IStrategy
 
 logger = logging.getLogger(__name__)
 
-EstimatorType: TypeAlias = RegressorMixin | str
+EstimatorType: TypeAlias = BaseSampler | str
 
 class IHyperOpt(ABC):
@@ -44,10 +43,11 @@
     def generate_estimator(self, dimensions: list[Dimension], **kwargs) -> EstimatorType:
         """
         Return base_estimator.
-        Can be any of "GP", "RF", "ET", "GBRT" or an instance of a class
-        inheriting from RegressorMixin (from sklearn).
+        Can be any of "TPESampler", "GPSampler", "CmaEsSampler", "NSGAIISampler",
+        "NSGAIIISampler", "QMCSampler" or an instance of a class
+        inheriting from BaseSampler (from optuna.samplers).
         """
-        return "ET"
+        return "NSGAIIISampler"
 
     def generate_roi_table(self, params: dict) -> dict[int, float]:
         """


@@ -7,10 +7,13 @@ import logging
 import sys
 import warnings
 from datetime import datetime, timezone
+from pathlib import Path
 from typing import Any
 
-from joblib import dump, load
+import optuna
+from joblib import delayed, dump, load, wrap_non_picklable_objects
 from joblib.externals import cloudpickle
+from optuna.exceptions import ExperimentalWarning
 from pandas import DataFrame
 
 from freqtrade.constants import DATETIME_PRINT_FORMAT, Config
@@ -20,7 +23,7 @@ from freqtrade.data.metrics import calculate_market_change
 from freqtrade.enums import HyperoptState
 from freqtrade.exceptions import OperationalException
 from freqtrade.ft_types import BacktestContentType
-from freqtrade.misc import deep_merge_dicts
+from freqtrade.misc import deep_merge_dicts, round_dict
 from freqtrade.optimize.backtesting import Backtesting
 
 # Import IHyperOptLoss to allow unpickling classes from these modules
@@ -28,21 +31,31 @@ from freqtrade.optimize.hyperopt.hyperopt_auto import HyperOptAuto
 from freqtrade.optimize.hyperopt_loss.hyperopt_loss_interface import IHyperOptLoss
 from freqtrade.optimize.hyperopt_tools import HyperoptStateContainer, HyperoptTools
 from freqtrade.optimize.optimize_reports import generate_strategy_stats
+from freqtrade.optimize.space import (
+    DimensionProtocol,
+    SKDecimal,
+    ft_CategoricalDistribution,
+    ft_FloatDistribution,
+    ft_IntDistribution,
+)
 from freqtrade.resolvers.hyperopt_resolver import HyperOptLossResolver
 from freqtrade.util.dry_run_wallet import get_dry_run_wallet
 
-# Suppress scikit-learn FutureWarnings from skopt
-with warnings.catch_warnings():
-    warnings.filterwarnings("ignore", category=FutureWarning)
-    from skopt import Optimizer
-    from skopt.space import Dimension
-
 logger = logging.getLogger(__name__)
 
 MAX_LOSS = 100000  # just a big enough number to be bad result in loss optimization
 
+optuna_samplers_dict = {
+    "TPESampler": optuna.samplers.TPESampler,
+    "GPSampler": optuna.samplers.GPSampler,
+    "CmaEsSampler": optuna.samplers.CmaEsSampler,
+    "NSGAIISampler": optuna.samplers.NSGAIISampler,
+    "NSGAIIISampler": optuna.samplers.NSGAIIISampler,
+    "QMCSampler": optuna.samplers.QMCSampler,
+}
 
 class HyperOptimizer:
     """
@@ -50,15 +63,16 @@
     This class is sent to the hyperopt worker processes.
     """
 
-    def __init__(self, config: Config) -> None:
-        self.buy_space: list[Dimension] = []
-        self.sell_space: list[Dimension] = []
-        self.protection_space: list[Dimension] = []
-        self.roi_space: list[Dimension] = []
-        self.stoploss_space: list[Dimension] = []
-        self.trailing_space: list[Dimension] = []
-        self.max_open_trades_space: list[Dimension] = []
-        self.dimensions: list[Dimension] = []
+    def __init__(self, config: Config, data_pickle_file: Path) -> None:
+        self.buy_space: list[DimensionProtocol] = []
+        self.sell_space: list[DimensionProtocol] = []
+        self.protection_space: list[DimensionProtocol] = []
+        self.roi_space: list[DimensionProtocol] = []
+        self.stoploss_space: list[DimensionProtocol] = []
+        self.trailing_space: list[DimensionProtocol] = []
+        self.max_open_trades_space: list[DimensionProtocol] = []
+        self.dimensions: list[DimensionProtocol] = []
+        self.o_dimensions: dict = {}
 
         self.config = config
         self.min_date: datetime
@@ -86,9 +100,7 @@
         )
         self.calculate_loss = self.custom_hyperoptloss.hyperopt_loss_function
 
-        self.data_pickle_file = (
-            self.config["user_data_dir"] / "hyperopt_results" / "hyperopt_tickerdata.pkl"
-        )
+        self.data_pickle_file = data_pickle_file
         self.market_change = 0.0
@@ -127,18 +139,6 @@
                 cloudpickle.register_pickle_by_value(mod)
                 self.hyperopt_pickle_magic(modules.__bases__)
 
-    def _get_params_dict(
-        self, dimensions: list[Dimension], raw_params: list[Any]
-    ) -> dict[str, Any]:
-        # Ensure the number of dimensions match
-        # the number of parameters in the list.
-        if len(raw_params) != len(dimensions):
-            raise ValueError("Mismatch in number of search-space dimensions.")
-
-        # Return a dict where the keys are the names of the dimensions
-        # and the values are taken from the list of parameters.
-        return {d.name: v for d, v in zip(dimensions, raw_params, strict=False)}
-
     def _get_params_details(self, params: dict) -> dict:
         """
         Return the params for each space
@@ -146,27 +146,36 @@
         result: dict = {}
 
         if HyperoptTools.has_space(self.config, "buy"):
-            result["buy"] = {p.name: params.get(p.name) for p in self.buy_space}
+            result["buy"] = round_dict({p.name: params.get(p.name) for p in self.buy_space}, 13)
         if HyperoptTools.has_space(self.config, "sell"):
-            result["sell"] = {p.name: params.get(p.name) for p in self.sell_space}
+            result["sell"] = round_dict({p.name: params.get(p.name) for p in self.sell_space}, 13)
         if HyperoptTools.has_space(self.config, "protection"):
-            result["protection"] = {p.name: params.get(p.name) for p in self.protection_space}
+            result["protection"] = round_dict(
+                {p.name: params.get(p.name) for p in self.protection_space}, 13
+            )
         if HyperoptTools.has_space(self.config, "roi"):
-            result["roi"] = {
-                str(k): v for k, v in self.custom_hyperopt.generate_roi_table(params).items()
-            }
+            result["roi"] = round_dict(
                {str(k): v for k, v in self.custom_hyperopt.generate_roi_table(params).items()}, 13
+            )
         if HyperoptTools.has_space(self.config, "stoploss"):
-            result["stoploss"] = {p.name: params.get(p.name) for p in self.stoploss_space}
+            result["stoploss"] = round_dict(
+                {p.name: params.get(p.name) for p in self.stoploss_space}, 13
+            )
         if HyperoptTools.has_space(self.config, "trailing"):
-            result["trailing"] = self.custom_hyperopt.generate_trailing_params(params)
+            result["trailing"] = round_dict(
+                self.custom_hyperopt.generate_trailing_params(params), 13
+            )
         if HyperoptTools.has_space(self.config, "trades"):
-            result["max_open_trades"] = {
-                "max_open_trades": (
-                    self.backtesting.strategy.max_open_trades
-                    if self.backtesting.strategy.max_open_trades != float("inf")
-                    else -1
-                )
-            }
+            result["max_open_trades"] = round_dict(
+                {
+                    "max_open_trades": (
+                        self.backtesting.strategy.max_open_trades
+                        if self.backtesting.strategy.max_open_trades != float("inf")
+                        else -1
+                    )
+                },
+                13,
+            )
 
         return result
@@ -246,7 +255,12 @@
             # noinspection PyProtectedMember
             attr.value = params_dict[attr_name]
 
-    def generate_optimizer(self, raw_params: list[Any]) -> dict[str, Any]:
+    @delayed
+    @wrap_non_picklable_objects
+    def generate_optimizer_wrapped(self, params_dict: dict[str, Any]) -> dict[str, Any]:
+        return self.generate_optimizer(params_dict)
+
+    def generate_optimizer(self, params_dict: dict[str, Any]) -> dict[str, Any]:
         """
         Used Optimize function.
         Called once per epoch to optimize whatever is configured.
@@ -254,7 +268,6 @@
         """
         HyperoptStateContainer.set_state(HyperoptState.OPTIMIZE)
         backtest_start_time = datetime.now(timezone.utc)
-        params_dict = self._get_params_dict(self.dimensions, raw_params)
 
         # Apply parameters
         if HyperoptTools.has_space(self.config, "buy"):
@@ -304,9 +317,9 @@
         with self.data_pickle_file.open("rb") as f:
             processed = load(f, mmap_mode="r")
         if self.analyze_per_epoch:
            # Data is not yet analyzed, rerun populate_indicators.
            processed = self.advise_and_trim(processed)
 
         bt_results = self.backtesting.backtest(
             processed=processed, start_date=self.min_date, end_date=self.max_date
@@ -318,10 +331,10 @@
                 "backtest_end_time": int(backtest_end_time.timestamp()),
             }
         )
-        result = self._get_results_dict(
+        return self._get_results_dict(
             bt_results, self.min_date, self.max_date, params_dict, processed=processed
         )
-        return result
 
     def _get_results_dict(
         self,
@@ -378,33 +391,41 @@
             "total_profit": total_profit,
         }
 
+    def convert_dimensions_to_optuna_space(self, s_dimensions: list[DimensionProtocol]) -> dict:
+        o_dimensions: dict[str, optuna.distributions.BaseDistribution] = {}
+        for original_dim in s_dimensions:
+            if isinstance(
+                original_dim,
+                ft_CategoricalDistribution | ft_IntDistribution | ft_FloatDistribution | SKDecimal,
+            ):
+                o_dimensions[original_dim.name] = original_dim
+            else:
+                raise OperationalException(
+                    f"Unknown search space {original_dim.name} - {original_dim} / "
+                    f"{type(original_dim)}"
+                )
+        return o_dimensions
+
     def get_optimizer(
         self,
-        cpu_count: int,
         random_state: int,
-        initial_points: int,
-        model_queue_size: int,
-    ) -> Optimizer:
-        dimensions = self.dimensions
-        estimator = self.custom_hyperopt.generate_estimator(dimensions=dimensions)
-
-        acq_optimizer = "sampling"
-        if isinstance(estimator, str):
-            if estimator not in ("GP", "RF", "ET", "GBRT"):
-                raise OperationalException(f"Estimator {estimator} not supported.")
-            else:
-                acq_optimizer = "auto"
-
-        logger.info(f"Using estimator {estimator}.")
-        return Optimizer(
-            dimensions,
-            base_estimator=estimator,
-            acq_optimizer=acq_optimizer,
-            n_initial_points=initial_points,
-            acq_optimizer_kwargs={"n_jobs": cpu_count},
-            random_state=random_state,
-            model_queue_size=model_queue_size,
-        )
+    ):
+        o_sampler = self.custom_hyperopt.generate_estimator(
+            dimensions=self.dimensions, random_state=random_state
+        )
+        self.o_dimensions = self.convert_dimensions_to_optuna_space(self.dimensions)
+        if isinstance(o_sampler, str):
+            if o_sampler not in optuna_samplers_dict.keys():
+                raise OperationalException(f"Optuna Sampler {o_sampler} not supported.")
+            with warnings.catch_warnings():
+                warnings.filterwarnings(action="ignore", category=ExperimentalWarning)
+                sampler = optuna_samplers_dict[o_sampler](seed=random_state)
+        else:
+            sampler = o_sampler
+        logger.info(f"Using optuna sampler {o_sampler}.")
+        return optuna.create_study(sampler=sampler, direction="minimize")
 
     def advise_and_trim(self, data: dict[str, DataFrame]) -> dict[str, DataFrame]:
         preprocessed = self.backtesting.strategy.advise_all_indicators(data)
def advise_and_trim(self, data: dict[str, DataFrame]) -> dict[str, DataFrame]: def advise_and_trim(self, data: dict[str, DataFrame]) -> dict[str, DataFrame]:
preprocessed = self.backtesting.strategy.advise_all_indicators(data) preprocessed = self.backtesting.strategy.advise_all_indicators(data)


@@ -1,3 +1,16 @@
-from skopt.space import Categorical, Dimension, Integer, Real  # noqa: F401
-
-from .decimalspace import SKDecimal  # noqa: F401
+from .decimalspace import SKDecimal
+from .optunaspaces import (
+    DimensionProtocol,
+    ft_CategoricalDistribution,
+    ft_FloatDistribution,
+    ft_IntDistribution,
+)
+
+# Alias for the distribution classes
+Dimension = DimensionProtocol
+Categorical = ft_CategoricalDistribution
+Integer = ft_IntDistribution
+Real = ft_FloatDistribution
+
+__all__ = ["Categorical", "Dimension", "Integer", "Real", "SKDecimal"]


@@ -1,47 +1,35 @@
-import numpy as np
-from skopt.space import Integer
+from optuna.distributions import FloatDistribution
 
-class SKDecimal(Integer):
+class SKDecimal(FloatDistribution):
     def __init__(
         self,
-        low,
-        high,
-        decimals=3,
-        prior="uniform",
-        base=10,
-        transform=None,
+        low: float,
+        high: float,
+        *,
+        step: float | None = None,
+        decimals: int | None = None,
         name=None,
-        dtype=np.int64,
     ):
-        self.decimals = decimals
+        """
+        FloatDistribution with a fixed step size.
+        Only one of step or decimals can be set.
+        :param low: lower bound
+        :param high: upper bound
+        :param step: step size (e.g. 0.001)
+        :param decimals: number of decimal places to round to (e.g. 3)
+        :param name: name of the distribution
+        """
+        if decimals is not None and step is not None:
+            raise ValueError("You can only set one of decimals or step")
+        if decimals is None and step is None:
+            raise ValueError("You must set one of decimals or step")
+        # Convert decimals to step
+        self.step = step or (1 / 10**decimals if decimals else 1)
+        self.name = name
 
-        self.pow_dot_one = pow(0.1, self.decimals)
-        self.pow_ten = pow(10, self.decimals)
-
-        _low = int(low * self.pow_ten)
-        _high = int(high * self.pow_ten)
-        # trunc to precision to avoid points out of space
-        self.low_orig = round(_low * self.pow_dot_one, self.decimals)
-        self.high_orig = round(_high * self.pow_dot_one, self.decimals)
-
-        super().__init__(_low, _high, prior, base, transform, name, dtype)
-
-    def __repr__(self):
-        return (
-            f"Decimal(low={self.low_orig}, high={self.high_orig}, decimals={self.decimals}, "
-            f"prior='{self.prior}', transform='{self.transform_}')"
-        )
-
-    def __contains__(self, point):
-        if isinstance(point, list):
-            point = np.array(point)
-        return self.low_orig <= point <= self.high_orig
-
-    def transform(self, Xt):
-        return super().transform([int(v * self.pow_ten) for v in Xt])
-
-    def inverse_transform(self, Xt):
-        res = super().inverse_transform(Xt)
-        # equivalent to [round(x * pow(0.1, self.decimals), self.decimals) for x in res]
-        return [int(v) / self.pow_ten for v in res]
+        super().__init__(
+            low=round(low, decimals) if decimals else low,
+            high=round(high, decimals) if decimals else high,
+            step=self.step,
+        )
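A quick usage sketch of the rewritten class (parameter values are illustrative): `decimals` is shorthand for the equivalent `step`, and containment now respects the step grid:

```python
s1 = SKDecimal(-0.35, -0.02, decimals=3, name="stoploss")
s2 = SKDecimal(-0.35, -0.02, step=0.001, name="stoploss")
assert s1.step == s2.step == 0.001
assert s1._contains(-0.123)       # on the 0.001 grid
assert not s1._contains(-0.1234)  # between grid points
```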


@@ -0,0 +1,59 @@
+from collections.abc import Sequence
+from typing import Any, Protocol
+
+from optuna.distributions import CategoricalDistribution, FloatDistribution, IntDistribution
+
+
+class DimensionProtocol(Protocol):
+    name: str
+
+
+class ft_CategoricalDistribution(CategoricalDistribution):
+    def __init__(
+        self,
+        categories: Sequence[Any],
+        name: str,
+        **kwargs,
+    ):
+        self.name = name
+        self.categories = categories
+        # if len(categories) <= 1:
+        #     raise Exception(f"need at least 2 categories for {name}")
+        return super().__init__(categories)
+
+    def __repr__(self):
+        return f"CategoricalDistribution({self.categories})"
+
+
+class ft_IntDistribution(IntDistribution):
+    def __init__(
+        self,
+        low: int | float,
+        high: int | float,
+        name: str,
+        **kwargs,
+    ):
+        self.name = name
+        self.low = int(low)
+        self.high = int(high)
+        return super().__init__(self.low, self.high, **kwargs)
+
+    def __repr__(self):
+        return f"IntDistribution(low={self.low}, high={self.high})"
+
+
+class ft_FloatDistribution(FloatDistribution):
+    def __init__(
+        self,
+        low: float,
+        high: float,
+        name: str,
+        **kwargs,
+    ):
+        self.name = name
+        self.low = low
+        self.high = high
+        return super().__init__(low, high, **kwargs)
+
+    def __repr__(self):
+        return f"FloatDistribution(low={self.low}, high={self.high}, step={self.step})"


@@ -14,9 +14,12 @@ from freqtrade.optimize.hyperopt_tools import HyperoptStateContainer
 with suppress(ImportError):
-    from skopt.space import Categorical, Integer, Real
-
-    from freqtrade.optimize.space import SKDecimal
+    from freqtrade.optimize.space import (
+        Categorical,
+        Integer,
+        Real,
+        SKDecimal,
+    )
 
 from freqtrade.exceptions import OperationalException
@@ -51,7 +54,8 @@ class BaseParameter(ABC):
            name is prefixed with 'buy_' or 'sell_'.
         :param optimize: Include parameter in hyperopt optimizations.
         :param load: Load parameter value from {space}_params.
-        :param kwargs: Extra parameters to skopt.space.(Integer|Real|Categorical).
+        :param kwargs: Extra parameters to optuna.distributions.
+            (IntDistribution|FloatDistribution|CategoricalDistribution).
         """
         if "name" in kwargs:
             raise OperationalException(
@@ -109,7 +113,7 @@ class NumericParameter(BaseParameter):
            parameter fieldname is prefixed with 'buy_' or 'sell_'.
         :param optimize: Include parameter in hyperopt optimizations.
         :param load: Load parameter value from {space}_params.
-        :param kwargs: Extra parameters to skopt.space.*.
+        :param kwargs: Extra parameters to optuna.distributions.*.
         """
         if high is not None and isinstance(low, Sequence):
             raise OperationalException(f"{self.__class__.__name__} space invalid.")
@@ -151,7 +155,7 @@ class IntParameter(NumericParameter):
            parameter fieldname is prefixed with 'buy_' or 'sell_'.
         :param optimize: Include parameter in hyperopt optimizations.
         :param load: Load parameter value from {space}_params.
-        :param kwargs: Extra parameters to skopt.space.Integer.
+        :param kwargs: Extra parameters to optuna.distributions.IntDistribution.
         """
         super().__init__(
@@ -160,7 +164,7 @@ class IntParameter(NumericParameter):
     def get_space(self, name: str) -> "Integer":
         """
-        Create skopt optimization space.
+        Create optuna distribution space.
         :param name: A name of parameter field.
         """
         return Integer(low=self.low, high=self.high, name=name, **self._space_params)
@@ -174,7 +178,7 @@ class IntParameter(NumericParameter):
            calculating 100ds of indicators.
         """
         if self.can_optimize():
-            # Scikit-optimize ranges are "inclusive", while python's "range" is exclusive
+            # optuna distribution ranges are "inclusive", while python's "range" is exclusive
             return range(self.low, self.high + 1)
         else:
             return range(self.value, self.value + 1)
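A short sketch of what these docstring changes mean in practice (printed names assume the aliases defined in `freqtrade.optimize.space` above; behavior outside a running hyperopt is shown):

```python
from freqtrade.strategy import IntParameter

buy_rsi = IntParameter(low=10, high=40, default=30, space="buy")
space = buy_rsi.get_space("buy_rsi")  # now an optuna IntDistribution subclass
print(type(space).__name__)           # ft_IntDistribution
print(list(buy_rsi.range))            # [30] - only the default outside hyperopt
```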
@@ -205,7 +209,7 @@ class RealParameter(NumericParameter):
            parameter fieldname is prefixed with 'buy_' or 'sell_'.
         :param optimize: Include parameter in hyperopt optimizations.
         :param load: Load parameter value from {space}_params.
-        :param kwargs: Extra parameters to skopt.space.Real.
+        :param kwargs: Extra parameters to optuna.distributions.FloatDistribution.
         """
         super().__init__(
             low=low, high=high, default=default, space=space, optimize=optimize, load=load, **kwargs
@@ -213,7 +217,7 @@ class RealParameter(NumericParameter):
     def get_space(self, name: str) -> "Real":
         """
-        Create skopt optimization space.
+        Create optimization space.
         :param name: A name of parameter field.
         """
         return Real(low=self.low, high=self.high, name=name, **self._space_params)
@@ -246,7 +250,7 @@ class DecimalParameter(NumericParameter):
            parameter fieldname is prefixed with 'buy_' or 'sell_'.
         :param optimize: Include parameter in hyperopt optimizations.
         :param load: Load parameter value from {space}_params.
-        :param kwargs: Extra parameters to skopt.space.Integer.
+        :param kwargs: Extra parameters to optuna's NumericParameter.
         """
         self._decimals = decimals
         default = round(default, self._decimals)
@@ -257,7 +261,7 @@ class DecimalParameter(NumericParameter):
     def get_space(self, name: str) -> "SKDecimal":
         """
-        Create skopt optimization space.
+        Create optimization space.
         :param name: A name of parameter field.
         """
         return SKDecimal(
@@ -305,7 +309,8 @@ class CategoricalParameter(BaseParameter):
            name is prefixed with 'buy_' or 'sell_'.
         :param optimize: Include parameter in hyperopt optimizations.
         :param load: Load parameter value from {space}_params.
-        :param kwargs: Extra parameters to skopt.space.Categorical.
+        :param kwargs: Compatibility only. Optuna's CategoricalDistribution does not
+            accept extra parameters.
         """
         if len(categories) < 2:
             raise OperationalException(
@@ -316,10 +321,10 @@ class CategoricalParameter(BaseParameter):
     def get_space(self, name: str) -> "Categorical":
         """
-        Create skopt optimization space.
+        Create optuna distribution space.
         :param name: A name of parameter field.
         """
-        return Categorical(self.opt_range, name=name, **self._space_params)
+        return Categorical(self.opt_range, name=name)
 
     @property
     def range(self):
@@ -355,7 +360,7 @@ class BooleanParameter(CategoricalParameter):
            name is prefixed with 'buy_' or 'sell_'.
         :param optimize: Include parameter in hyperopt optimizations.
         :param load: Load parameter value from {space}_params.
-        :param kwargs: Extra parameters to skopt.space.Categorical.
+        :param kwargs: Extra parameters to optuna.distributions.CategoricalDistribution.
         """
         categories = [True, False]


@@ -79,7 +79,8 @@ plot = ["plotly>=4.0"]
 hyperopt = [
     "scipy",
     "scikit-learn",
-    "ft-scikit-optimize>=0.9.2",
+    "optuna > 4.0.0",
+    "cmaes",
     "filelock",
 ]
 freqai = [


@@ -4,5 +4,6 @@
 # Required for hyperopt
 scipy==1.15.2
 scikit-learn==1.6.1
-ft-scikit-optimize==0.9.2
 filelock==3.18.0
+optuna==4.2.1
+cmaes==0.11.1


@@ -1,13 +1,12 @@
 # pragma pylint: disable=missing-docstring,W0212,C0103
 from datetime import datetime, timedelta
-from functools import wraps
+from functools import partial, wraps
 from pathlib import Path
 from unittest.mock import ANY, MagicMock, PropertyMock
 
 import pandas as pd
 import pytest
 from filelock import Timeout
-from skopt.space import Integer
 
 from freqtrade.commands.optimize_commands import setup_optimize_configuration, start_hyperopt
 from freqtrade.data.history import load_data
@@ -17,7 +16,7 @@ from freqtrade.optimize.hyperopt import Hyperopt
 from freqtrade.optimize.hyperopt.hyperopt_auto import HyperOptAuto
 from freqtrade.optimize.hyperopt_tools import HyperoptTools
 from freqtrade.optimize.optimize_reports import generate_strategy_stats
-from freqtrade.optimize.space import SKDecimal
+from freqtrade.optimize.space import SKDecimal, ft_IntDistribution
 from freqtrade.strategy import IntParameter
 from freqtrade.util import dt_utc
 from tests.conftest import (
@@ -578,7 +577,7 @@ def test_generate_optimizer(mocker, hyperopt_conf) -> None:
             "buy_plusdi": 0.02,
             "buy_rsi": 35,
         },
-        "roi": {"0": 0.12000000000000001, "20.0": 0.02, "50.0": 0.01, "110.0": 0},
+        "roi": {"0": 0.12, "20.0": 0.02, "50.0": 0.01, "110.0": 0},
         "protection": {
             "protection_cooldown_lookback": 20,
             "protection_enabled": True,
@@ -606,9 +605,7 @@ def test_generate_optimizer(mocker, hyperopt_conf) -> None:
     hyperopt.hyperopter.min_date = dt_utc(2017, 12, 10)
     hyperopt.hyperopter.max_date = dt_utc(2017, 12, 13)
     hyperopt.hyperopter.init_spaces()
-    generate_optimizer_value = hyperopt.hyperopter.generate_optimizer(
-        list(optimizer_param.values())
-    )
+    generate_optimizer_value = hyperopt.hyperopter.generate_optimizer(optimizer_param)
     assert generate_optimizer_value == response_expected
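The expected ROI value changes from `0.12000000000000001` to `0.12` because `_get_params_details` now rounds every space's values to 13 digits via `round_dict`. The noise being removed is ordinary binary floating point error:

```python
v = 0.1 + 0.02
print(v)             # 0.12000000000000001 - binary representation noise
print(round(v, 13))  # 0.12
```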
@@ -1088,8 +1085,8 @@ def test_in_strategy_auto_hyperopt(mocker, hyperopt_conf, tmp_path, fee) -> None
     assert opt.backtesting.strategy.max_open_trades != 1
 
     opt.custom_hyperopt.generate_estimator = lambda *args, **kwargs: "ET1"
-    with pytest.raises(OperationalException, match="Estimator ET1 not supported."):
-        opt.get_optimizer(2, 42, 2, 2)
+    with pytest.raises(OperationalException, match="Optuna Sampler ET1 not supported."):
+        opt.get_optimizer(42)
 
 @pytest.mark.filterwarnings("ignore::DeprecationWarning")
@@ -1186,19 +1183,27 @@ def test_in_strategy_auto_hyperopt_per_epoch(mocker, hyperopt_conf, tmp_path, fe
 def test_SKDecimal():
     space = SKDecimal(1, 2, decimals=2)
-    assert 1.5 in space
-    assert 2.5 not in space
-    assert space.low == 100
-    assert space.high == 200
+    assert space._contains(1.5)
+    assert not space._contains(2.5)
+    assert space.low == 1
+    assert space.high == 2
 
-    assert space.inverse_transform([200]) == [2.0]
-    assert space.inverse_transform([100]) == [1.0]
-    assert space.inverse_transform([150, 160]) == [1.5, 1.6]
+    assert space._contains(1.51)
+    assert space._contains(1.01)
+    # Falls out of the space with 2 decimals
+    assert not space._contains(1.511)
+    assert not space._contains(1.111222)
 
-    assert space.transform([1.5]) == [150]
-    assert space.transform([2.0]) == [200]
-    assert space.transform([1.0]) == [100]
-    assert space.transform([1.5, 1.6]) == [150, 160]
+    with pytest.raises(ValueError):
+        SKDecimal(1, 2, step=5, decimals=0.2)
+
+    with pytest.raises(ValueError):
+        SKDecimal(1, 2, step=None, decimals=None)
+
+    s = SKDecimal(1, 2, step=0.1, decimals=None)
+    assert s.step == 0.1
+    assert s._contains(1.1)
+    assert not s._contains(1.11)
 
 def test_stake_amount_unlimited_max_open_trades(mocker, hyperopt_conf, tmp_path, fee) -> None:
@@ -1217,10 +1222,6 @@ def test_stake_amount_unlimited_max_open_trades(mocker, hyperopt_conf, tmp_path,
         }
     )
     hyperopt = Hyperopt(hyperopt_conf)
-    mocker.patch(
-        "freqtrade.optimize.hyperopt.hyperopt_optimizer.HyperOptimizer._get_params_dict",
-        return_value={"max_open_trades": -1},
-    )
 
     assert isinstance(hyperopt.hyperopter.custom_hyperopt, HyperOptAuto)
@@ -1228,7 +1229,7 @@ def test_stake_amount_unlimited_max_open_trades(mocker, hyperopt_conf, tmp_path,
     hyperopt.start()
 
-    assert hyperopt.hyperopter.backtesting.strategy.max_open_trades == 1
+    assert hyperopt.hyperopter.backtesting.strategy.max_open_trades == 3
 
 def test_max_open_trades_dump(mocker, hyperopt_conf, tmp_path, fee, capsys) -> None:
@@ -1246,9 +1247,15 @@ def test_max_open_trades_dump(mocker, hyperopt_conf, tmp_path, fee, capsys) -> N
         }
     )
     hyperopt = Hyperopt(hyperopt_conf)
 
+    def optuna_mock(hyperopt, *args, **kwargs):
+        a = hyperopt.get_optuna_asked_points(*args, **kwargs)
+        a[0]._cached_frozen_trial.params["max_open_trades"] = -1
+        return a, [True]
+
     mocker.patch(
-        "freqtrade.optimize.hyperopt.hyperopt_optimizer.HyperOptimizer._get_params_dict",
-        return_value={"max_open_trades": -1},
+        "freqtrade.optimize.hyperopt.Hyperopt.get_asked_points",
+        side_effect=partial(optuna_mock, hyperopt),
     )
 
     assert isinstance(hyperopt.hyperopter.custom_hyperopt, HyperOptAuto)
@@ -1266,8 +1273,8 @@ def test_max_open_trades_dump(mocker, hyperopt_conf, tmp_path, fee, capsys) -> N
     hyperopt = Hyperopt(hyperopt_conf)
 
     mocker.patch(
-        "freqtrade.optimize.hyperopt.hyperopt_optimizer.HyperOptimizer._get_params_dict",
-        return_value={"max_open_trades": -1},
+        "freqtrade.optimize.hyperopt.Hyperopt.get_asked_points",
+        side_effect=partial(optuna_mock, hyperopt),
     )
 
     assert isinstance(hyperopt.hyperopter.custom_hyperopt, HyperOptAuto)
@@ -1304,7 +1311,7 @@ def test_max_open_trades_consistency(mocker, hyperopt_conf, tmp_path, fee) -> No
     assert isinstance(hyperopt.hyperopter.custom_hyperopt, HyperOptAuto)
 
     hyperopt.hyperopter.custom_hyperopt.max_open_trades_space = lambda: [
-        Integer(1, 10, name="max_open_trades")
+        ft_IntDistribution(1, 10, "max_open_trades")
     ]
 
     first_time_evaluated = False
@@ -1313,9 +1320,10 @@ def test_max_open_trades_consistency(mocker, hyperopt_conf, tmp_path, fee) -> No
         @wraps(func)
         def wrapper(*args, **kwargs):
             nonlocal first_time_evaluated
             stake_amount = func(*args, **kwargs)
             if first_time_evaluated is False:
-                assert stake_amount == 1
+                assert stake_amount == 2
                 first_time_evaluated = True
             return stake_amount
@@ -1329,5 +1337,5 @@ test_max_open_trades_consistency(mocker, hyperopt_conf, tmp_path, fee) -> No
     hyperopt.start()
 
-    assert hyperopt.hyperopter.backtesting.strategy.max_open_trades == 8
-    assert hyperopt.config["max_open_trades"] == 8
+    assert hyperopt.hyperopter.backtesting.strategy.max_open_trades == 4
+    assert hyperopt.config["max_open_trades"] == 4


@@ -895,7 +895,7 @@ def test_is_informative_pairs_callback(default_conf):
 def test_hyperopt_parameters():
     HyperoptStateContainer.set_state(HyperoptState.INDICATORS)
-    from skopt.space import Categorical, Integer, Real
+    from optuna.distributions import CategoricalDistribution, FloatDistribution, IntDistribution
 
     with pytest.raises(OperationalException, match=r"Name is determined.*"):
         IntParameter(low=0, high=5, default=1, name="hello")
@@ -926,7 +926,7 @@ def test_hyperopt_parameters():
     intpar = IntParameter(low=0, high=5, default=1, space="buy")
     assert intpar.value == 1
-    assert isinstance(intpar.get_space(""), Integer)
+    assert isinstance(intpar.get_space(""), IntDistribution)
     assert isinstance(intpar.range, range)
     assert len(list(intpar.range)) == 1
     # Range contains ONLY the default / value.
@@ -938,7 +938,7 @@ def test_hyperopt_parameters():
     fltpar = RealParameter(low=0.0, high=5.5, default=1.0, space="buy")
     assert fltpar.value == 1
-    assert isinstance(fltpar.get_space(""), Real)
+    assert isinstance(fltpar.get_space(""), FloatDistribution)
 
     fltpar = DecimalParameter(low=0.0, high=0.5, default=0.14, decimals=1, space="buy")
     assert fltpar.value == 0.1
@@ -955,7 +955,7 @@ def test_hyperopt_parameters():
         ["buy_rsi", "buy_macd", "buy_none"], default="buy_macd", space="buy"
     )
     assert catpar.value == "buy_macd"
-    assert isinstance(catpar.get_space(""), Categorical)
+    assert isinstance(catpar.get_space(""), CategoricalDistribution)
     assert isinstance(catpar.range, list)
     assert len(list(catpar.range)) == 1
     # Range contains ONLY the default / value.
@@ -966,7 +966,7 @@ def test_hyperopt_parameters():
     boolpar = BooleanParameter(default=True, space="buy")
     assert boolpar.value is True
-    assert isinstance(boolpar.get_space(""), Categorical)
+    assert isinstance(boolpar.get_space(""), CategoricalDistribution)
     assert isinstance(boolpar.range, list)
     assert len(list(boolpar.range)) == 1