resolve conflict, ensure gpu works with transformer
@@ -29,7 +29,7 @@ If all goes well, you should now see a `backtest-result-{timestamp}_signals.pkl`
`user_data/backtest_results` folder.

To analyze the entry/exit tags, we now need to use the `freqtrade backtesting-analysis` command
with `--analysis-groups` option provided with space-separated arguments:

``` bash
freqtrade backtesting-analysis -c <config.json> --analysis-groups 0 1 2 3 4 5
@@ -39,6 +39,7 @@ This command will read from the last backtesting results. The `--analysis-groups
used to specify the various tabular outputs showing the profit for each group or trade,
ranging from the simplest (0) to the most detailed per pair, per buy and per sell tag (4):

* 0: overall winrate and profit summary by enter_tag
* 1: profit summaries grouped by enter_tag
* 2: profit summaries grouped by enter_tag and exit_tag
* 3: profit summaries grouped by pair and enter_tag
@@ -115,3 +116,38 @@ For example, if your backtest timerange was `20220101-20221231` but you only wan
```bash
freqtrade backtesting-analysis -c <config.json> --timerange 20220101-20220201
```

### Printing out rejected signals

Use the `--rejected-signals` option to print out rejected signals.

```bash
freqtrade backtesting-analysis -c <config.json> --rejected-signals
```

### Writing tables to CSV

Some of the tabular outputs can become large, so printing them out to the terminal is not preferable.
Use the `--analysis-to-csv` option to disable printing out of tables to standard out and write them to CSV files.

```bash
freqtrade backtesting-analysis -c <config.json> --analysis-to-csv
```

By default this will write one file per output table you specified in the `backtesting-analysis` command, e.g.

```bash
freqtrade backtesting-analysis -c <config.json> --analysis-to-csv --rejected-signals --analysis-groups 0 1
```

This will write to `user_data/backtest_results`:

* rejected_signals.csv
* group_0.csv
* group_1.csv

To override where the files will be written, also specify the `--analysis-csv-path` option.

```bash
freqtrade backtesting-analysis -c <config.json> --analysis-to-csv --analysis-csv-path another/data/path/
```

@@ -397,3 +397,21 @@ Here we create a `PyTorchMLPRegressor` class that implements the `fit` method. T
        return dataframe
```
To see a full example, you can refer to the [classifier test strategy class](https://github.com/freqtrade/freqtrade/blob/develop/tests/strategy/strats/freqai_test_classifier.py).

#### Improving performance with `torch.compile()`

Torch provides a `torch.compile()` method that can be used to improve performance for specific GPU hardware. More details can be found [here](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html). In brief, you simply wrap your `model` in `torch.compile()`:

```python
model = PyTorchMLPModel(
    input_dim=n_features,
    output_dim=1,
    **self.model_kwargs
)
model.to(self.device)
model = torch.compile(model)
```

Then proceed to use the model as normal. Keep in mind that doing this will remove eager execution, which means errors and tracebacks will not be informative.

@@ -18,9 +18,10 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
| `purge_old_models` | Number of models to keep on disk (not relevant to backtesting). Default is 2, which means that dry/live runs will keep the latest 2 models on disk. Setting to 0 keeps all models. This parameter also accepts a boolean to maintain backwards compatibility. <br> **Datatype:** Integer. <br> Default: `2`.
| `save_backtest_models` | Save models to disk when running backtesting. Backtesting operates most efficiently by saving the prediction data and reusing them directly for subsequent runs (when you wish to tune entry/exit parameters). Saving backtesting models to disk also allows you to use the same model files for starting a dry/live instance with the same model `identifier`. <br> **Datatype:** Boolean. <br> Default: `False` (no models are saved).
| `fit_live_predictions_candles` | Number of historical candles to use for computing target (label) statistics from prediction data, instead of from the training dataset (more information can be found [here](freqai-configuration.md#creating-a-dynamic-target-threshold)). <br> **Datatype:** Positive integer.
| `continual_learning` | Use the final state of the most recently trained model as starting point for the new model, allowing for incremental learning (more information can be found [here](freqai-running.md#continual-learning)). Beware that this is currently a naive approach to incremental learning, and it has a high probability of overfitting/getting stuck in local minima while the market moves away from your model. We have the connections here primarily for experimental purposes and so that it is ready for more mature approaches to continual learning in chaotic systems like the crypto market. <br> **Datatype:** Boolean. <br> Default: `False`.
| `write_metrics_to_disk` | Collect train timings, inference timings and cpu usage in json file. <br> **Datatype:** Boolean. <br> Default: `False`
| `data_kitchen_thread_count` | <br> Designate the number of threads you want to use for data processing (outlier methods, normalization, etc.). This has no impact on the number of threads used for training. If the user does not set it (default), FreqAI will use max number of threads - 2 (leaving 1 physical core available for Freqtrade bot and FreqUI) <br> **Datatype:** Positive integer.
| `activate_tensorboard` | <br> Indicate whether or not to activate tensorboard for the tensorboard enabled modules (currently Reinforcement Learning, XGBoost, Catboost, and PyTorch). Tensorboard needs Torch installed, which means you will need the torch/RL docker image or you need to answer "yes" to the install question about whether or not you wish to install Torch. <br> **Datatype:** Boolean. <br> Default: `True`.
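
For orientation, here is a minimal sketch of how a few of the parameters above sit inside the `freqai` section of the configuration file (the values shown are simply the defaults from the table; all other required `freqai` settings are omitted):

```json
{
    "freqai": {
        "purge_old_models": 2,
        "save_backtest_models": false,
        "write_metrics_to_disk": false,
        "activate_tensorboard": true
    }
}
```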
### Feature parameters

@@ -114,5 +115,5 @@ Mandatory parameters are marked as **Required** and have to be set in one of the
|------------|-------------|
| | **Extraneous parameters**
| `freqai.keras` | If the selected model makes use of Keras (typical for TensorFlow-based prediction models), this flag needs to be activated so that the model save/loading follows Keras standards. <br> **Datatype:** Boolean. <br> Default: `False`.
| `freqai.conv_width` | The width of a neural network input tensor. This replaces the need for shifting candles (`include_shifted_candles`) by feeding in historical data points as the second dimension of the tensor. Technically, this parameter can also be used for regressors, but it only adds computational overhead and does not change the model training/prediction. <br> **Datatype:** Integer. <br> Default: `2`.
| `freqai.reduce_df_footprint` | Recast all numeric columns to float32/int32, with the objective of reducing ram/disk usage and decreasing train/inference timing. This parameter is set in the main level of the Freqtrade configuration file (not inside FreqAI). <br> **Datatype:** Boolean. <br> Default: `False`.
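
One nuance worth spelling out: despite the `freqai.` prefix used in the table, the `reduce_df_footprint` flag is set at the main level of the configuration file, not inside the `freqai` section, e.g. (illustrative only):

```json
{
    "reduce_df_footprint": true
}
```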
@@ -135,92 +135,104 @@ Parameter details can be found [here](freqai-parameter-table.md), but in general

## Creating a custom reward function

!!! danger "Not for production"
    Warning!
    The reward function provided with the Freqtrade source code is a showcase of functionality designed to show/test as many possible environment control features as possible. It is also designed to run quickly on small computers. This is a benchmark, it is *not* for live production. Please beware that you will need to create your own `calculate_reward()` function or use a template built by other users outside of the Freqtrade source code.

As you begin to modify the strategy and the prediction model, you will quickly realize some important differences between the Reinforcement Learner and the Regressors/Classifiers. Firstly, the strategy does not set a target value (no labels!). Instead, you set the `calculate_reward()` function inside the `MyRLEnv` class (see below). A default `calculate_reward()` is provided inside `prediction_models/ReinforcementLearner.py` to demonstrate the necessary building blocks for creating rewards, but this is *not* designed for production. Users *must* create their own custom reinforcement learning model class or use a pre-built one from outside the Freqtrade source code and save it to `user_data/freqaimodels`. It is inside the `calculate_reward()` where creative theories about the market can be expressed. For example, you can reward your agent when it makes a winning trade, and penalize the agent when it makes a losing trade. Or perhaps, you wish to reward the agent for entering trades, and penalize the agent for sitting in trades too long. Below we show examples of how these rewards are all calculated:

!!! note "Hint"
    The best reward functions are ones that are continuously differentiable, and well scaled. In other words, adding a single large negative penalty to a rare event is not a good idea, and the neural net will not be able to learn that function. Instead, it is better to add a small negative penalty to a common event. This will help the agent learn faster. Not only this, but you can help improve the continuity of your rewards/penalties by having them scale with severity according to some linear/exponential functions. In other words, you'd slowly scale the penalty as the duration of the trade increases. This is better than a single large penalty occurring at a single point in time.

```python
from freqtrade.freqai.prediction_models.ReinforcementLearner import ReinforcementLearner
from freqtrade.freqai.RL.Base5ActionRLEnv import Actions, Base5ActionRLEnv, Positions


class MyCoolRLModel(ReinforcementLearner):
    """
    User created RL prediction model.

    Save this file to `freqtrade/user_data/freqaimodels`

    then use it with:

    freqtrade trade --freqaimodel MyCoolRLModel --config config.json --strategy SomeCoolStrat

    Here the users can override any of the functions
    available in the `IFreqaiModel` inheritance tree. Most importantly for RL, this
    is where the user overrides `MyRLEnv` (see below), to define custom
    `calculate_reward()` function, or to override any other parts of the environment.

    This class also allows users to override any other part of the IFreqaiModel tree.
    For example, the user can override `def fit()` or `def train()` or `def predict()`
    to take fine-tuned control over these processes.

    Another common override may be `def data_cleaning_predict()` where the user can
    take fine-tuned control over the data handling pipeline.
    """
    class MyRLEnv(Base5ActionRLEnv):
        """
        User made custom environment. This class inherits from BaseEnvironment and gym.env.
        Users can override any functions from those parent classes. Here is an example
        of a user customized `calculate_reward()` function.

        Warning!
        This function is a showcase of functionality designed to show as many possible
        environment control features as possible. It is also designed to run quickly
        on small computers. This is a benchmark, it is *not* for live production.
        """
        def calculate_reward(self, action: int) -> float:
            # first, penalize if the action is not valid
            if not self._is_valid(action):
                return -2
            pnl = self.get_unrealized_profit()

            factor = 100

            pair = self.pair.replace(':', '')

            # you can use feature values from dataframe
            # Assumes the shifted RSI indicator has been generated in the strategy.
            rsi_now = self.raw_features[f"%-rsi-period_10_shift-1_{pair}_"
                                        f"{self.config['timeframe']}"].iloc[self._current_tick]

            # reward agent for entering trades
            if (action in (Actions.Long_enter.value, Actions.Short_enter.value)
                    and self._position == Positions.Neutral):
                if rsi_now < 40:
                    factor = 40 / rsi_now
                else:
                    factor = 1
                return 25 * factor

            # discourage agent from not entering trades
            if action == Actions.Neutral.value and self._position == Positions.Neutral:
                return -1
            max_trade_duration = self.rl_config.get('max_trade_duration_candles', 300)
            trade_duration = self._current_tick - self._last_trade_tick
            if trade_duration <= max_trade_duration:
                factor *= 1.5
            elif trade_duration > max_trade_duration:
                factor *= 0.5
            # discourage sitting in position
            if self._position in (Positions.Short, Positions.Long) and \
                    action == Actions.Neutral.value:
                return -1 * trade_duration / max_trade_duration
            # close long
            if action == Actions.Long_exit.value and self._position == Positions.Long:
                if pnl > self.profit_aim * self.rr:
                    factor *= self.rl_config['model_reward_parameters'].get('win_reward_factor', 2)
                return float(pnl * factor)
            # close short
            if action == Actions.Short_exit.value and self._position == Positions.Short:
                if pnl > self.profit_aim * self.rr:
                    factor *= self.rl_config['model_reward_parameters'].get('win_reward_factor', 2)
                return float(pnl * factor)
            return 0.
```

## Using Tensorboard

Reinforcement Learning models benefit from tracking training metrics. FreqAI has integrated Tensorboard to allow users to track training and evaluation performance across all coins and across all retrainings. Tensorboard is activated via the following command:
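
The command itself is shown in the Tensorboard section further below; as a quick sketch (with `unique-id` standing in for the `identifier` set in your `freqai` configuration):

```bash
cd freqtrade
tensorboard --logdir user_data/models/unique-id
```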

@@ -233,32 +245,30 @@ where `unique-id` is the `identifier` set in the `freqai` configuration file. Th



## Custom logging

FreqAI also provides a built-in episodic summary logger called `self.tensorboard_log` for adding custom information to the Tensorboard log. By default, this function is already called once per step inside the environment to record the agent actions. All values accumulated for all steps in a single episode are reported at the conclusion of each episode, followed by a full reset of all metrics to 0 in preparation for the subsequent episode.

`self.tensorboard_log` can also be used anywhere inside the environment, for example, it can be added to the `calculate_reward` function to collect more detailed information about how often various parts of the reward were called:

```python
class MyRLEnv(Base5ActionRLEnv):
    """
    User made custom environment. This class inherits from BaseEnvironment and gym.env.
    Users can override any functions from those parent classes. Here is an example
    of a user customized `calculate_reward()` function.
    """
    def calculate_reward(self, action: int) -> float:
        if not self._is_valid(action):
            self.tensorboard_log("invalid")
            return -2
```

!!! Note
    The `self.tensorboard_log()` function is designed for tracking incremented objects only i.e. events, actions inside the training environment. If the event of interest is a float, the float can be passed as the second argument e.g. `self.tensorboard_log("float_metric1", 0.23)`. In this case the metric values are not incremented.
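
As a minimal sketch extending the example above (the metric names are purely illustrative), both styles can be combined inside `calculate_reward()`:

```python
class MyRLEnv(Base5ActionRLEnv):
    def calculate_reward(self, action: int) -> float:
        if not self._is_valid(action):
            # increments the "invalid" event counter for this episode
            self.tensorboard_log("invalid")
            return -2

        # records a float value instead of incrementing a counter
        self.tensorboard_log("current_pnl", self.get_unrealized_profit())
        return 0.
```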

## Choosing a base environment

FreqAI provides three base environments, `Base3ActionRLEnvironment`, `Base4ActionEnvironment` and `Base5ActionEnvironment`. As the names imply, the environments are customized for agents that can select from 3, 4 or 5 actions. The `Base3ActionEnvironment` is the simplest; the agent can select from hold, long, or short. This environment can also be used for long-only bots (it automatically follows the `can_short` flag from the strategy), where long is the enter condition and short is the exit condition. Meanwhile, in the `Base4ActionEnvironment`, the agent can enter long, enter short, hold neutral, or exit position. Finally, in the `Base5ActionEnvironment`, the agent has the same actions as Base4, but instead of a single exit action, it separates exit long and exit short. The main changes stemming from the environment selection include:

@@ -131,6 +131,9 @@ You can choose to adopt a continual learning scheme by setting `"continual_learn
???+ danger "Continual learning enforces a constant parameter space"
    Since `continual_learning` means that the model parameter space *cannot* change between trainings, `principal_component_analysis` is automatically disabled when `continual_learning` is enabled. Hint: PCA changes the parameter space and the number of features, learn more about PCA [here](freqai-feature-engineering.md#data-dimensionality-reduction-with-principal-component-analysis).

???+ danger "Experimental functionality"
    Beware that this is currently a naive approach to incremental learning, and it has a high probability of overfitting/getting stuck in local minima while the market moves away from your model. We have the mechanics available in FreqAI primarily for experimental purposes and so that it is ready for more mature approaches to continual learning in chaotic systems like the crypto market.
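
For reference, enabling it is a single flag inside the `freqai` section of your configuration (all other required settings omitted for brevity):

```json
{
    "freqai": {
        "continual_learning": true
    }
}
```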

## Hyperopt

You can hyperopt using the same command as for [typical Freqtrade hyperopt](hyperopt.md):
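
For illustration only, reusing the example strategy, model, and config referenced elsewhere in these docs (the loss function and timerange below are arbitrary placeholders), such a call might look like:

```bash
freqtrade hyperopt --hyperopt-loss SharpeHyperOptLoss \
    --strategy FreqaiExampleStrategy --freqaimodel LightGBMRegressor \
    --strategy-path freqtrade/templates \
    --config config_examples/config_freqai.example.json \
    --timerange 20220101-20220201
```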

@@ -158,7 +161,14 @@ This specific hyperopt would help you understand the appropriate `DI_values` for

## Using Tensorboard

!!! note "Availability"
    FreqAI includes tensorboard for a variety of models, including XGBoost, all PyTorch models, Reinforcement Learning, and Catboost. If you would like to see Tensorboard integrated into another model type, please open an issue on the [Freqtrade GitHub](https://github.com/freqtrade/freqtrade/issues).

!!! danger "Requirements"
    Tensorboard logging requires the FreqAI torch installation/docker image.

The easiest way to use tensorboard is to ensure `freqai.activate_tensorboard` is set to `True` (default setting) in your configuration file, run FreqAI, then open a separate shell and run:

```bash
cd freqtrade
@@ -168,3 +178,7 @@ tensorboard --logdir user_data/models/unique-id
where `unique-id` is the `identifier` set in the `freqai` configuration file. This command must be run in a separate shell if you wish to view the output in your browser at 127.0.0.1:6060 (6060 is the default port used by Tensorboard).



!!! note "Deactivate for improved performance"
    Tensorboard logging can slow down training and should be deactivated for production use.

@@ -32,7 +32,10 @@ The easiest way to quickly test FreqAI is to run it in dry mode with the followi
freqtrade trade --config config_examples/config_freqai.example.json --strategy FreqaiExampleStrategy --freqaimodel LightGBMRegressor --strategy-path freqtrade/templates
```

You will see the boot-up process of automatic data downloading, followed by simultaneous training and trading.

!!! danger "Not for production"
    The example strategy provided with the Freqtrade source code is designed for showcasing/testing a wide variety of FreqAI features. It is also designed to run on small computers so that it can be used as a benchmark between developers and users. It is *not* designed to be run in production.

An example strategy, prediction model, and config to use as starting points can be found in
`freqtrade/templates/FreqaiExampleStrategy.py`, `freqtrade/freqai/prediction_models/LightGBMRegressor.py`, and
@@ -69,11 +72,7 @@ pip install -r requirements-freqai.txt
```

!!! Note
    Catboost will not be installed on low-powered arm devices (raspberry), since it does not provide wheels for this platform.

!!! Note "python 3.11"
    Some dependencies (Catboost, Torch) currently don't support python 3.11. Freqtrade therefore only supports python 3.10 for these models/dependencies.
    Tests involving these dependencies are skipped on 3.11.

### Usage with docker

@@ -30,12 +30,6 @@ The easiest way to install and run Freqtrade is to clone the bot Github reposito
!!! Warning "Up-to-date clock"
    The clock on the system running the bot must be accurate, synchronized to an NTP server frequently enough to avoid problems with communication to the exchanges.

!!! Error "Running setup.py install for gym did not run successfully."
    If you get an error related to gym, we suggest you downgrade setuptools to version 65.5.0. You can do it with the following command:
    ```bash
    pip install setuptools==65.5.0
    ```

------

## Requirements
@@ -242,6 +236,7 @@ source .env/bin/activate

```bash
python3 -m pip install --upgrade pip
python3 -m pip install -r requirements.txt
python3 -m pip install -e .
```

@@ -1,6 +1,6 @@
markdown==3.3.7
mkdocs==1.4.3
mkdocs-material==9.1.12
mdx_truly_sane_lists==1.3
pymdown-extensions==10.0.1
jinja2==3.1.2

@@ -134,7 +134,9 @@ python3 scripts/rest_client.py --config rest_config.json <command> [optional par
| `reload_config` | Reloads the configuration file.
| `trades` | List last trades. Limited to 500 trades per call.
| `trade/<tradeid>` | Get specific trade.
| `trade/<tradeid>` | DELETE - Remove trade from the database. Tries to close open orders. Requires manual handling of this trade on the exchange.
| `trade/<tradeid>/open-order` | DELETE - Cancel open order for this trade.
| `trade/<tradeid>/reload` | GET - Reload a trade from the Exchange. Only works in live, and can potentially help recover a trade that was manually sold on the exchange.
| `show_config` | Shows part of the current configuration with relevant settings to operation.
| `logs` | Shows last log messages.
| `status` | Lists all open trades.
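
For instance, following the invocation pattern shown above, two of the listed commands can be called as:

```bash
python3 scripts/rest_client.py --config rest_config.json trades
python3 scripts/rest_client.py --config rest_config.json show_config
```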

@@ -227,8 +227,8 @@ for val in self.buy_ema_short.range:
        f'ema_short_{val}': ta.EMA(dataframe, timeperiod=val)
    }))

# Combine all dataframes, and reassign the original dataframe column
dataframe = pd.concat(frames, axis=1)
```

Freqtrade does however also counter this by running `dataframe.copy()` on the dataframe right after the `populate_indicators()` method - so performance implications of this should be low to non-existent.
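
For context, the snippet above comes from a `populate_indicators()`-style loop along the following lines (a sketch reconstructed from the docs' example; `buy_ema_short` is assumed to be an `IntParameter` defined on the strategy, and imports are added for completeness):

```python
import talib.abstract as ta
import pandas as pd
from pandas import DataFrame

# Build each new column in its own small DataFrame first...
frames = [dataframe]
for val in self.buy_ema_short.range:
    frames.append(DataFrame({
        f'ema_short_{val}': ta.EMA(dataframe, timeperiod=val)
    }))

# ...then combine all dataframes once, and reassign the original dataframe column.
# This avoids inserting many columns one by one into the original dataframe.
dataframe = pd.concat(frames, axis=1)
```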

@@ -187,6 +187,7 @@ official commands. You can ask at any moment for help with `/help`.
| `/forcelong <pair> [rate]` | Instantly buys the given pair. Rate is optional and only applies to limit orders. (`force_entry_enable` must be set to True)
| `/forceshort <pair> [rate]` | Instantly shorts the given pair. Rate is optional and only applies to limit orders. This will only work on non-spot markets. (`force_entry_enable` must be set to True)
| `/delete <trade_id>` | Delete a specific trade from the Database. Tries to close open orders. Requires manual handling of this trade on the exchange.
| `/reload_trade <trade_id>` | Reload a trade from the Exchange. Only works in live, and can potentially help recover a trade that was manually sold on the exchange.
| `/cancel_open_order <trade_id> | /coo <trade_id>` | Cancel an open order for a trade.
| **Metrics** |
| `/profit [<n>]` | Display a summary of your profit/loss from closed trades and some stats about your performance, over the last n days (all trades by default)

@@ -723,6 +723,9 @@ usage: freqtrade backtesting-analysis [-h] [-v] [--logfile FILE] [-V]
                                      [--exit-reason-list EXIT_REASON_LIST [EXIT_REASON_LIST ...]]
                                      [--indicator-list INDICATOR_LIST [INDICATOR_LIST ...]]
                                      [--timerange YYYYMMDD-[YYYYMMDD]]
                                      [--rejected]
                                      [--analysis-to-csv]
                                      [--analysis-csv-path PATH]

optional arguments:
  -h, --help            show this help message and exit
@@ -736,19 +739,27 @@ optional arguments:
                        pair and enter_tag, 4: by pair, enter_ and exit_tag
                        (this can get quite large)
  --enter-reason-list ENTER_REASON_LIST [ENTER_REASON_LIST ...]
                        Space separated list of entry signals to analyse.
                        Default: all. e.g. 'entry_tag_a entry_tag_b'
  --exit-reason-list EXIT_REASON_LIST [EXIT_REASON_LIST ...]
                        Space separated list of exit signals to analyse.
                        Default: all. e.g.
                        'exit_tag_a roi stop_loss trailing_stop_loss'
  --indicator-list INDICATOR_LIST [INDICATOR_LIST ...]
                        Space separated list of indicators to analyse. e.g.
                        'close rsi bb_lowerband profit_abs'
  --timerange YYYYMMDD-[YYYYMMDD]
                        Timerange to filter trades for analysis,
                        start inclusive, end exclusive. e.g.
                        20220101-20220201
  --rejected
                        Print out rejected trades table
  --analysis-to-csv
                        Write out tables to individual CSVs, by default to
                        'user_data/backtest_results' unless '--analysis-csv-path' is given.
  --analysis-csv-path [PATH]
                        Optional path where individual CSVs will be written. If not used,
                        CSVs will be written to 'user_data/backtest_results'.

Common arguments:
  -v, --verbose         Verbose mode (-vv for more, -vvv to get all messages).
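
Combining several of these options (the tag names below are the placeholder examples used in the help text above):

```bash
freqtrade backtesting-analysis -c <config.json> --analysis-groups 0 2 \
    --enter-reason-list entry_tag_a entry_tag_b \
    --exit-reason-list roi stop_loss
```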