
Thread: Automated trading algo generation

  1. #21
    Quote Originally Posted by ;
    1.2. It is possible to use something like a variance filter, calculated using only systems with a Sharpe ratio above a certain point. Reducing variance after the backtesting period ends is the hard part, since you are in the realm of the unknown.
    Isn't the denominator of the Sharpe ratio the standard deviation of returns? I'm not certain I follow - replacing the standard deviation of returns in the denominator with the variance of returns transforms the formulation from the Sharpe ratio into Thorp's continuous optimal f calculation of the Kelly criterion. But I don't think that's exactly what you intended. I apologize for being a bit thick.
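    To make the distinction concrete, here's a minimal sketch of the two formulas (my own illustrative code, not from the thread; the return series is made up):

    Code:
    import numpy as np

    def sharpe_ratio(returns, risk_free=0.0):
        """Sharpe ratio: mean excess return over the STANDARD DEVIATION."""
        excess = np.asarray(returns) - risk_free
        return excess.mean() / excess.std(ddof=1)

    def kelly_fraction(returns, risk_free=0.0):
        """Thorp's continuous Kelly (optimal f): mean excess return over the VARIANCE."""
        excess = np.asarray(returns) - risk_free
        return excess.mean() / excess.var(ddof=1)

    rets = np.random.normal(0.001, 0.02, 1000)   # hypothetical daily returns
    print(sharpe_ratio(rets))    # dimensionless, per period
    print(kelly_fraction(rets))  # a leverage fraction, not dimensionless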

    Quote Originally Posted by ;
    2.1. To be honest I have never checked whether the systems that fall into a negatively biased random walk possess a positive autocorrelation (do you mean a positive autocorrelation of the return series?).
    Yes, and it is only a guess. If it really is a random walk then of course the returns are random rather than correlated. But if there's a trend (negative bias) in the random walk, maybe this would show up at a statistical level (ACF) in either the return or trade series. Perhaps I ought to do some simulations in R...

    Quote Originally Posted by ;
    I have some strict criteria for discarding systems, and I monitor them as soon as they go live. I generally discard systems at a 99.5% confidence level for the tests I use for this purpose.
    This implies that once the statistical discard criterion is met, there's a greater likelihood that performance will continue to deteriorate or fail to be positive. That seems reasonable. Or maybe it's just wiser to reduce the unknowns by discarding a system whose live operation does not match its historical performance distribution? Can a second hypothesis test be used to turn a system back on after it has been deactivated? Could this off/on switch permit lower thresholds than 99.5% confidence, and thus quicker exits from downward-sloping equity curves and quicker re-entries to upward-sloping ones?

    I have some systems that need to be re-optimized about every 6 months or performance deteriorates. It is also a fact that all systems have periods where they perform well and periods where they perform badly. If performance degrades below a certain threshold such that the distributions are clearly different, then it makes sense to discard the system. From a practical perspective, however, by the time you get there you've typically already suffered a rather sizable drawdown. I'm not sure I have any insight as to how to overcome this problem. It's kind of like taking a stop loss: your future loss is limited, but you've locked in the current loss at the worst point up to the present moment.

  2. #22
    Quote Originally Posted by ;
    Yes, and it is only a guess. If it really is a random walk then of course the returns are random rather than correlated. But if there's a trend (negative bias) in the random walk, maybe this would show up at a statistical level (ACF) in either the return or trade series. Perhaps I ought to do some simulations in R...
    When the variance of the RW is big enough, the trend is drowned out and no longer appears on an ACF/PACF.
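    A quick simulation sketch of that point, in Python rather than the R mentioned above (all parameters are purely illustrative):

    Code:
    import numpy as np
    from statsmodels.tsa.stattools import acf

    np.random.seed(0)
    n, drift, sigma = 5000, -0.0001, 0.01   # small negative bias, much larger noise

    increments = drift + sigma * np.random.randn(n)   # per-step "returns"
    walk = np.cumsum(increments)                      # the biased random walk

    # The drift only shifts the mean of the increments, so their ACF stays
    # near zero and the bias leaves no autocorrelation signature here:
    print(acf(increments, nlags=5))

    # The bias shows up instead as a one-sample t-statistic on the mean,
    # and only when |drift| is large relative to sigma / sqrt(n):
    print(increments.mean() / (increments.std(ddof=1) / np.sqrt(n)))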

    Quote Originally Posted by ;
    Can a second hypothesis test be used to turn a system back on after it has been deactivated?
    It is a good idea. A system that's very dependent on the market conditions will perform poorly when those conditions are not met. Conversely, it's supposed to do best when the conditions are met. If the favourable market state is infrequent, the system will show bad results in a long-term backtest. One could check the market and decide to turn the system on or off. But you could also pick the periods of the backtest where the system performs well and check whether the PDFs of returns are alike across these periods (i.e. they are not just lucky runs). If so, the system itself is a test for the market condition being met. The Kullback-Leibler divergence between good-period performance and recent performance, for instance, could be used to modulate the MM (rather than a binary ON/OFF).
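    A rough sketch of how that could look; the histogram binning and the exp(-KL) sizing rule are my own assumptions, not something proposed above:

    Code:
    import numpy as np

    def kl_divergence(p_samples, q_samples, bins=20):
        """Histogram-based estimate of KL(P || Q) between two return samples."""
        lo = min(p_samples.min(), q_samples.min())
        hi = max(p_samples.max(), q_samples.max())
        p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
        q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
        p = p.astype(float) + 1e-9   # smooth empty bins to avoid log(0)
        q = q.astype(float) + 1e-9
        p /= p.sum()
        q /= q.sum()
        return float(np.sum(p * np.log(p / q)))

    good_period = np.random.normal(0.002, 0.010, 500)   # hypothetical backtest slice
    recent      = np.random.normal(-0.001, 0.015, 100)  # hypothetical live returns

    kl = kl_divergence(good_period, recent)
    size_multiplier = np.exp(-kl)   # assumed rule: full size at KL=0, shrinking as regimes diverge
    print(kl, size_multiplier)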

  3. #23
    Quote Originally Posted by ;
    The Kullback-Leibler divergence between good-period performance and recent performance, for example, could be used to modulate the MM (rather than a binary ON/OFF).
    It's been a while since I've posted! I spent most of the previous 12 months busy with a startup that has run its course, so I can get back to this, something I thoroughly enjoy, in a more full-time manner. It's also been good to see two of my favorite usual suspects making great contributions to FF! Also sub'd to Jo now. Pip and FXEZ, hope you have both been keeping well, and happy new year!

    Pip, I feel that the Kullback-Leibler divergence (KLD) is on the money. It's a way of measuring the gap between two distributions. It could also give a way to assess whether a system (given the correct distribution) should be on or off. Likely a PDF composed of a pair of metrics derived from whatever works. It could be an interesting way for an ML model to choose between its ensemble of systems. Once I get a few systems back up and running within the next few weeks I will focus on this particular approach - initially focusing on simply creating PDFs by transforming price information for the periods before and while the systems trade in backtesting. Analysis of the resulting KLD time series could be fruitful.

    I have been using KLD to assess PDFs based on tick rates (quantity per unit time, and time per fixed quantity) as well as the spread dynamics between correlated pairs, in search of a better edge for my previous scalper. I originally stumbled upon KLD when evaluating compression algos for other work, but it was recently refreshed in my memory after seeing it in a paper on an agent-based approach to FX market structure (attached). I like the model in the paper - particularly the abstractions for decision making based on exogenous and endogenous data. The endogenous data proxy is nice as well.

    Pip, are you using FXCM as your back end?
    https://forexintuitive.com/attachmen...1529447310.pdf

  4. #24
    Quote Originally Posted by ;
    AlgoTraderJo, are you actually using Metatrader? I think the tool is really a significant part of the total problem.
    No, Metatrader 4 is a terrible platform. I trade with Oanda, connecting directly through their broker API.

  5. #25
    Quote Originally Posted by ;
    Isn't the denominator of the Sharpe ratio the standard deviation of returns? I'm not certain I follow - replacing the standard deviation of returns in the denominator with the variance of returns transforms the formulation from the Sharpe ratio into Thorp's continuous optimal f calculation of the Kelly criterion. But I don't think that's exactly what you intended. I apologize for being a bit thick. ... Yes, and it is only a guess. If it really is a random walk then of course the returns are random rather than correlated. But if there's a trend (negative bias) in the...
    1. Yes, I'm referring to replacing the denominator with the variance, which really does give Thorp's continuous optimal f calculation of the Kelly criterion.

    2. If the random walk is long enough you won't see any ACF, particularly if the negative drift is small compared with the standard deviation of the random walk. You may only observe an ACF when the drift is quite large or when the standard deviation of the walk is small. In the case of systems that decay into a negatively biased random walk, the bias is the spread, which is actually small compared with the standard deviation of the returns of your strategies.

    3. You can always opt for a different confidence level, but then your likelihood of false positives increases. However, using a 99.5% confidence interval does not necessarily mean you will go into deeper losses. Consider a worst-case threshold based on a system that has to follow an exponential equity curve: the system may be discarded just because it's flat, and consequently stops complying with the curve development expected from the model it should match after some time. Criteria that let you discard like this are particularly useful, because it is possible to discard with high confidence without being deep in losses, meaning a milder drawdown. Systems can always be traded again if they become profitable after hitting a worst-case threshold, but then you need to realize that your worst case now depends on the new distribution that includes the unfavourable conditions you had to endure, so you have traded your initial strategy for a worse version of its initial implementation, which may have worse discarding criteria.
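    As a toy illustration of point 3, here's a discard test that can fire on a merely flat equity curve (the normal model, thresholds and numbers are my assumptions, not the poster's actual tests):

    Code:
    import numpy as np
    from scipy import stats

    def should_discard(live_returns, bt_mean, bt_std, alpha=0.005):
        """Discard when the live mean return sits below the backtest
        expectation at 99.5% confidence (one-sided z-test)."""
        n = len(live_returns)
        z = (np.mean(live_returns) - bt_mean) / (bt_std / np.sqrt(n))
        return z < stats.norm.ppf(alpha)   # deep in the left tail -> discard

    # A flat curve (mean ~ 0) can trigger the discard without a deep drawdown:
    flat = np.random.normal(0.0, 0.01, 250)   # hypothetical live returns
    print(should_discard(flat, bt_mean=0.002, bt_std=0.01))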

  6. #26
    The great majority of the automated trading strategies that I have seen are all about predicting where the price will go next. It may be the next bar or some n-bar horizon. Let's say that over the next period the price is predicted to go up. Is it really such a good idea to just go long and wait for what the next prediction will be? Let's take four trading agents doing so, but starting their period at different points in time. Let's say they trade the daily timeframe, and let's use M30 for a hypothetical chart.

    The first agent luckily picked an entry near a low. The second entered on a breakout which has a good chance of being re-tested. The third entered just as the breakout completely lost its momentum, and the last one got caught right in the middle of a congestion which could equally break up or down. Even though they will all make money in the end, would you value their individual positions the same? Personally, I do not. The first agent holds a great entry that it ought to let run. The second is much better off taking profit now, before the expected re-test erases the floating pips; and if the re-test does not come, better safe than sorry. The third would have been better off not entering at all, and the last is clearly in a beg-and-hope position.

    Let's pretend that for some reason they re-synchronize and make a decision at the re-test of the breakout. All of them predict the market will continue up. The first can easily lock the current trade at BE and open another long without raising its risk. If the second opens another position, it will hold two positions at the same price; this is the same as a single over-leveraged position. The other two would be adding to a loser. Even though they will all make money in the end, I clearly cannot assign the same merit to those four trades.

    Let's now pretend that the next prediction is a down day. The last three agents take their money and go short without question. But will the first agent really close its position and go short? The down day may not hit the SL, and the trend can resume the day after tomorrow. The prediction may also be wrong, and a great position would have been thrown away for a loss. Isn't the longer-term profit preferable to the immediate reward? Sure, a system that catches all of the ups and downs ought to make more money. But this could come with a much bigger risk (and trading costs).

  7. #27
    Your trading system can be a scalper or a long-term strategy. You may swing trade or favor breakouts. You can pyramid up or average down. You can take partial profits or hedge your positions. You can trade based on FA or TA. All these different methods are basically doing the exact same thing: they modulate the exposure over time.

    FA is hard to incorporate in a quantifiable and automated manner, so let's concentrate on TA only. A fully automatic and autonomous trading system is no more than a function that takes the past price (and volume if available) and outputs the optimal exposure, isn't it? Not quite, because in my previous post I stated that the first agent should keep its long even given a downward forecast, while the three others are fine to go short. Also, opening a position while another is in the money and opening one while it's at a loss aren't the same thing. Therefore I say that a fully automatic and autonomous trading system is no more than a function that takes your current exposure, your current open profit or loss, and the past price (and volume if available), and outputs the optimal exposure with respect to risk and trading cost.
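    In code, such a function might look like the following sketch (the names and types are mine; it's a signature, not an implementation):

    Code:
    from dataclasses import dataclass
    from typing import Sequence

    @dataclass
    class MarketState:
        exposure: float           # current position: -1.0 (full short) .. +1.0 (full long)
        open_pnl: float           # unrealized profit/loss of the open position, in pips
        prices: Sequence[float]   # past prices, most recent last
        volumes: Sequence[float]  # past volumes, if the feed provides them

    def target_exposure(state: MarketState, cost_per_trade: float) -> float:
        """The whole trading system as one function: state -> optimal exposure.
        Scalping, pyramiding, partial profits and hedging all reduce to how
        this function moves the exposure over time."""
        raise NotImplementedError   # this is the strategy to be learned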

    More importantly, this does not necessarily imply predicting the price. I once played chess against a master-ranked player. Needless to say I lost. At some stage of the match he moved his queen into the middle of the board. It's quite unusual to expose that piece this way. I was puzzled. Three moves later I was checkmated. Of course I asked him why he moved there. What did I miss? He just said that he didn't see anything special. He simply knew it was good to have the queen there, based on what the board looked like. He didn't predict my following three moves. He just knew. It's the same when the price is in an uptrend and has retraced to some formerly broken resistance. You cannot know if the level will turn into support. You do not know if the trend will last. You just know it's better to be long than short.

    What about a machine that learns the strategy, instead of the tactics, of the trading game? How? By learning this exposure(.) function.

  8. #28
    Very interesting topic. I've been attempting to create a machine learning strategy to maximize profits. I believe you're right, PipMeUp: by learning this exposure you're able to read the market, since you look at historical movements. This is how my approach to building my system looks... Let's take an example: a martingale strategy gives you the net exposure in pips, loss and profit. By placing virtual trades you're able to get a good idea of the risk level you're trading at, simply by analyzing the losing trades... the bigger the loss/lot size gets, the greater the chance that you're able to get a high-quality signal. What I would like to do is filter out all these big losses, because based on this info I'm able to anticipate the future... Another way to filter these trades is simple: is your current trade in profit or not? So you only take a real trade when the virtual trade is x amount in profit. Another thing I'd like to say is that I believe the best defensive strategy is attack, so trading multiple pairs is essential. If you look at the game itself, it's in continuous movement, so in order to win future movements you have to think 5 steps ahead... so instead of taking 1 trade you take 5 trades, where you split the take profit at x amount of pips above or below price, but you move all trades to breakeven (see the sketch below).
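    A minimal sketch of that virtual-trade filter (the threshold, pip size and price handling are my assumptions):

    Code:
    def take_real_trade(virtual_open: float, current: float, direction: int,
                        min_profit_pips: float = 10.0, pip: float = 0.0001) -> bool:
        """Only fire the real trade once the shadow 'virtual' trade is at
        least min_profit_pips in profit. direction: +1 long, -1 short."""
        profit_pips = direction * (current - virtual_open) / pip
        return profit_pips >= min_profit_pips

    # A virtual long opened at 1.1000 with price now at 1.1015 is 15 pips up:
    print(take_real_trade(1.1000, 1.1015, direction=+1))   # True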

    Now, a problem we face is that these approaches may take up a lot of resources depending on how many trades you're taking... a lot of brokers allow a maximum number of trades on demo, so I've been thinking of working with multiple accounts, filtering these trades and copying them to one or more accounts, so that you're able to filter out all the bad trades and maximize your profit by selecting the trades most likely to win.

    I've seen this guy who apparently can run 100 MT4 platforms on 1 laptop: http://www.nj4x.com/. If we can do this, then we can run as many EAs as possible, analyze the data, filter them into negative and positive, and automatically copy the winning trades.

    Anyone running an insane number of EAs? I'm new to programming but I'm learning; I just hope somebody can help me out with a couple of things to get this project started. Cheers.

    Will the pips be with you

  9. #29
    Hi Pip,

    Quote Originally Posted by ;
    ....Therefore I say that a fully automatic and autonomous trading system is no more than a function that takes your current exposure, your current open profit or loss, and the past price (and volume if available), and outputs the optimal exposure with respect to risk and trading cost. More importantly this does not necessarily imply predicting the price.
    Agree with you 100%. The notion is that a trading system, whether implemented in a fully automated manner or to some extent (maybe entirely) implemented manually by a person, is a function. A challenge for the algorithmic or ML approach is that a manually or human-implemented system can easily incorporate new data as it arrives. Concerning the exposure function, passing it an arbitrary amount of price and volume history can quickly become intractable, or limit the methods we can employ. Conversely, throwing away the historic data leads to information loss that a human trader does not suffer - how easy is it to quickly scan back through a chart with your eyes? So the challenge is to find a function that embodies the intuitive sweet spot that your chess-master opponent operates in.

    Quote Originally Posted by ;
    ....At some stage of the game he moved his queen into the middle of the board. It's quite unusual to expose that piece this way. .... I asked him why he moved there. What did I miss? He said that he didn't see anything special. He simply knew it was good to have the queen there, based on what the board looked like. He didn't predict my following three moves. He just knew.
    He knew it was a good idea to put that piece in that location at that point in the game. His intuition, or his subconscious thought processes, knew it was the best move given the current state of the chess board. Naturally his intuition is also backed by his experience as a chess master. We're attempting to come up with a function to handle entries, exits and exposure - and it also must embody a trader's experience, to be able to place a trade because the market looks right. So exactly what, or how many, inputs do we need?

    The challenge of tractability comes into play when considering inputs. How much information is needed to evaluate exposure with sufficient power to be profitable? Do you just feed in the whole price history (quotes, range bars, time bars etc.)? Will a window over the data be adequate? Which features of this data matter, and how are they chosen?

    I appear to be posing more questions than answers! Potentially this is a case for unsupervised feature learning (UFL), but again it comes back to inputs: for example, applying UFL to a fixed-size image constrains your inputs to, say, 8 million inputs for an 8-megapixel image. In contrast, our input data grows with each new quote. Psychologically, price levels from previous major events will influence human market participants, but then maybe I need to throw that thinking out and let the model learn it. Time for some research and experiments.... My knowledge of this subject is more limited than I need it to be.

