Starting around Aug 4th, the Yahoo! Finance provider is showing 100x the correct volume for most securities (stocks, ETFs).
Their website shows correct values.
This volume error is not present with other providers such as Morningstar, Wealth Data, etc.
Kindly fix.
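For anyone who wants to check this outside WL7, here is a minimal sketch (Python) against the unofficial v8 chart endpoint that Yahoo's own website uses; the URL and JSON field names are assumptions based on that public response shape, not a supported API.
CODE:
# Minimal sketch: fetch recent daily bars for one symbol from Yahoo's
# unofficial v8 chart endpoint (the one their website itself uses) and
# print the volumes so they can be eyeballed against finance.yahoo.com.
# Endpoint and JSON field names are assumptions, not a supported API.
import datetime
import requests

def print_recent_volumes(symbol: str, span: str = "1mo") -> None:
    url = f"https://query1.finance.yahoo.com/v8/finance/chart/{symbol}"
    params = {"range": span, "interval": "1d"}
    headers = {"User-Agent": "Mozilla/5.0"}  # Yahoo tends to reject the default UA
    reply = requests.get(url, params=params, headers=headers, timeout=10).json()
    chart = reply["chart"]["result"][0]
    stamps = chart["timestamp"]
    volumes = chart["indicators"]["quote"][0]["volume"]
    for ts, vol in zip(stamps, volumes):
        day = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc).date()
        print(day, vol)

print_recent_volumes("XLB")  # a 100x-inflated bar stands out immediately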
It's showing correctly for me. Try deleting the corrupt data:
1. Right-click on the chart and choose "Reload chart data from provider" [per symbol]
2. DataSets > Data Truncation [on a per-DataSet basis]
3. Historical Providers > right-click, "Delete local files" [all data for the provider]
Issue still present.
What I did:
Since virtually all stock & ETF symbols were affected, option 1 was impractical. So first I tried #3 (deleted all local files + the internal request-tracking info) and re-downloaded the data. As this didn't resolve the issue, I tried #2: deleted ALL data for ALL Yahoo DataSets, closed & re-opened WL7 (to clear the cache), and re-downloaded everything. But the issue is still present, as you can see from the chart (pic 1); pic 2 shows the time stamps of the Yahoo data files.
We all know that Yahoo is a free and inconsistent data source; we offer the provider anyway with that knowledge. This is clearly a Yahoo issue, not any kind of bug in our provider. Did you try emailing Yahoo support? Or why not use WealthData?
That said, my Yahoo data is, like Eugene's, fine for XLB.
I use WealthData when I can, but it only provides a limited number of symbols, and I have 5000+ in Yahoo DataSets.
I'm using Yahoo essentially as a temporary/backup source until Norgate comes out with a WL7 plugin... but I'm beginning to doubt it will ever happen (the last note on their website is dated Mar '21... "under development").
It might indeed be a back-end issue with Yahoo... I just noticed that it's also present in WL6.9, to a variable degree, with the same start date of 8/4/21. If you know their support's email address, do post it.
I might add that reloading the data resolves the issue, but that is clearly impractical for thousands of symbols.
I guess the bulk data download and the individual symbol download access different servers.
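In the meantime, one could narrow the reloading to just the affected symbols: a series whose median daily volume after 8/4/21 sits roughly two orders of magnitude above its prior median is almost certainly corrupt. A rough sketch, assuming each symbol's cached bars can be exported to a simple date/volume CSV (the file layout below is hypothetical):
CODE:
# Rough sketch: flag symbols whose median daily volume after 2021-08-04
# jumped by ~100x versus the median before it. Assumes one CSV per symbol
# with "date" and "volume" columns; that layout is hypothetical.
import csv
import datetime
import statistics

CUTOFF = datetime.date(2021, 8, 4)

def looks_inflated(csv_path: str, factor: float = 20.0) -> bool:
    before, after = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            day = datetime.date.fromisoformat(row["date"])
            vol = int(row["volume"])
            (after if day >= CUTOFF else before).append(vol)
    if not before or not after:
        return False  # not enough history on one side of the cutoff
    return statistics.median(after) > factor * statistics.median(before)

# Print only the symbols actually worth reloading by hand.
for sym in ["XLB", "SPY", "QQQ"]:  # stand-ins for your 5000+ symbols
    if looks_inflated(f"{sym}.csv"):
        print(sym, "looks 100x-inflated after", CUTOFF)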
QUOTE:
It might indeed be a back-end issue with Yahoo... I just noticed that it's also present in WL6.9, to a variable degree, with the same start date of 8/4/21
I had checked WL6 and it was correct for me as well.
QUOTE:
I guess the bulk data download and the individual symbol download access different servers.
No, WL7 hits only one endpoint at Yahoo.
It looks like Yahoo has issues on only some servers in its farm. Unfortunate, but hard to cope with automatically. And I, too, am disappointed with the lack of progress on Norgate's part.
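The flaky-server theory is easy to probe, at least as a sketch (same assumed endpoint as above): ask the identical question many times and see whether the farm ever answers differently.
CODE:
# Sketch: hit the same unofficial Yahoo chart URL repeatedly and compare
# the returned volume series. Any disagreement between runs would support
# the "only some servers in the farm are bad" theory.
import requests

URL = "https://query1.finance.yahoo.com/v8/finance/chart/XLB"
PARAMS = {"range": "3mo", "interval": "1d"}
HEADERS = {"User-Agent": "Mozilla/5.0"}

def fetch_volumes() -> tuple:
    reply = requests.get(URL, params=PARAMS, headers=HEADERS, timeout=10).json()
    vols = reply["chart"]["result"][0]["indicators"]["quote"][0]["volume"]
    return tuple(vols[:-1])  # drop the last bar, which may still be forming

distinct = {fetch_volumes() for _ in range(20)}
print(len(distinct), "distinct volume series across 20 requests")
# More than one distinct answer to the same question means at least
# two servers behind the endpoint disagree.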
Not to beat this issue to death... but I'm guessing you guys do *not* check Offline Mode in WL7. I do (and I leave "Update data on demand" unchecked in WL6), as I presume it's faster, especially when running backtests.
All the more important for me now that I frequently use a VPN, which tends to slow things down (regardless of how fast your internet connection is).
-----------------
Maybe Norgate needs a helping hand; they're a small company. I wonder if they're bogged down by Covid restrictions down under?
The VPN usage could be a clue to the flawed data you're getting.
QUOTE:
The VPN usage could be a clue to the flawed data you're getting.
But it doesn't seem to affect other data providers? Also, refreshing the data from inside a chart (VPN on) seems to fix it - and that goes to the same endpoint, as you stated above.
Lastly, I *only* use US-based VPN servers, so it's not as if Yahoo thinks the connection is from Timbuktu!
Well, from this standpoint, it seems the server-side issue has to do with the parallel requests the provider makes when you do a bulk update or update a DataSet.
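That hypothesis can be tested directly, at least as a sketch: fire a burst of simultaneous requests for the same symbol, roughly the way a bulk update presumably would, and check whether any response deviates from the consensus (endpoint and JSON shape are the same assumptions as in the earlier sketches).
CODE:
# Sketch: issue a burst of parallel requests for one symbol and count how
# many distinct volume series come back. A clean backend should yield one.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://query1.finance.yahoo.com/v8/finance/chart/XLB"
PARAMS = {"range": "1mo", "interval": "1d"}
HEADERS = {"User-Agent": "Mozilla/5.0"}

def fetch_volumes(_) -> tuple:
    reply = requests.get(URL, params=PARAMS, headers=HEADERS, timeout=10).json()
    vols = reply["chart"]["result"][0]["indicators"]["quote"][0]["volume"]
    return tuple(vols[:-1])  # drop the last bar, which may still be forming

with ThreadPoolExecutor(max_workers=16) as pool:
    tally = Counter(pool.map(fetch_volumes, range(32)))

for series, count in tally.most_common():
    print(count, "responses ending with volume", series[-1])
# More than one line printed would mean the parallel burst itself
# provoked inconsistent answers.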
To check whether the VPN was indeed the causative factor, I re-downloaded ALL Yahoo data in both WL7 & WL6 *without* the VPN.
The volume error is still present, starting in early August. Reloading the data from inside a chart, however, corrects the volume.
QUOTE:
...the server-side issue has to do with the parallel requests the provider makes when you do a bulk update...
That would seem to be the logical conclusion.
I'm failing to grok how any parallel processing here could cause the volume to be multiplied by 100.
It was noticed before (in the WL6 era) that Yahoo! DataSet updates might occasionally fail, requiring several retries. This is just speculation, but I don't consider it impossible that their backend returns bad data when parallel requests are made.
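If that speculation holds, one way to cope automatically, purely a sketch and not anything the provider actually does, would be to accept a download only after two independent fetches agree:
CODE:
# Sketch of a defensive download: keep re-fetching until two consecutive
# responses agree, and only then trust the data. Illustrative only; the
# endpoint and JSON shape are the same assumptions as in earlier sketches.
# Note: during market hours the live bar's volume keeps changing, so run
# this after the close (or drop the last bar before comparing).
import requests

HEADERS = {"User-Agent": "Mozilla/5.0"}

def fetch_volumes(symbol: str) -> tuple:
    url = f"https://query1.finance.yahoo.com/v8/finance/chart/{symbol}"
    params = {"range": "1mo", "interval": "1d"}
    reply = requests.get(url, params=params, headers=HEADERS, timeout=10).json()
    return tuple(reply["chart"]["result"][0]["indicators"]["quote"][0]["volume"])

def fetch_validated(symbol: str, max_tries: int = 5) -> tuple:
    previous = None
    for _ in range(max_tries):
        current = fetch_volumes(symbol)
        if current == previous:  # two consecutive fetches agree
            return current
        previous = current
    raise RuntimeError(f"{symbol}: no two fetches agreed in {max_tries} tries")

print(fetch_validated("XLB")[-5:])  # last five validated daily volumes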