Forum Replies Created

Viewing 15 replies - 16 through 30 (of 45 total)
  • Thread Starter miketemby

    (@miketemby)

    please provide the report number

    DHYKFNUM

    Can you also please respond to comments about the crawler not running after purge or TTL expiry.

    Thread Starter miketemby

    (@miketemby)

    The PHP file outputs the following:

    11
    2022-03-24 22:33:34
    2022-03-25 09:33:34

    See it for yourself here:
    https://gippsafe.com.au/time.php
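    For reference, the three lines appear to be the stored offset, the raw server time, and the offset-adjusted time. A rough sketch of the equivalent logic (illustrative Python, not the actual time.php, which was supplied by support):

```python
from datetime import datetime, timedelta, timezone

def diagnostic(gmt_offset_hours):
    """Return what time.php appears to print: the WordPress gmt_offset,
    the raw (UTC) server time, and the time with the offset applied."""
    now = datetime.now(timezone.utc).replace(tzinfo=None, microsecond=0)
    adjusted = now + timedelta(hours=gmt_offset_hours)
    return gmt_offset_hours, now, adjusted

offset, server_time, local_time = diagnostic(11)  # 11 = Melbourne (AEDT) offset
print(offset)       # 11
print(server_time)  # e.g. 2022-03-24 22:33:34
print(local_time)   # e.g. 2022-03-25 09:33:34
```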

    Thread Starter miketemby

    (@miketemby)

    for crawler with purge, let's say you have 100 pages. The crawler starts from page 1, then page 2, page 3, page 4, etc. Let's imagine at page 50, you or something triggered a purge all.

    before the purge, pages 1–50 are cached already. Then at page 50, purge all happened, so pages 1–50 are not cached anymore. So the crawler stops, resets and waits to start from page 1 again, as pages 1–50 are no longer cached.

    NO, that's not what's happening. Purge All is run before the crawler starts. When the crawler starts AFTER that, it doesn't work… it just crawls 1 page and stops and outputs

    Last interval: 44s ago
    
    Ended reason: stopped_reset
    
    Last crawled: 1 item(s)

    As above, the same thing happens if TTL has expired.

    • This reply was modified 3 years ago by miketemby.
    Thread Starter miketemby

    (@miketemby)

    Adding to the above: following last night's crawler run at 12:40am, during which pages were given a TTL of 42 minutes, my pages are now (obviously) not cached because they expired 7 hours ago. But when I manually run the crawler, again it just hits one page and returns the same message as if Purge All had been run.

    Last interval: 44s ago
    
    Ended reason: stopped_reset
    
    Last crawled: 1 item(s)

    I need to then run it again for it to crawl the 14 mapped pages.

    This makes no sense – please explain the logic behind this. It seems quite simple to me: a crawler should find pages that are not cached and crawl them to rebuild the cache. But LS Cache seems to do the opposite. It finds uncached pages and decides it should stop because they are not cached…
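    To spell out what I mean, here's a toy sketch (illustrative Python, obviously not the plugin's code) of what I'd expect a cache-warming crawler to do:

```python
def expected_crawl(pages, is_cached):
    """Visit every page that is NOT cached and warm it, instead of
    stopping as soon as cache misses appear."""
    warmed = []
    for page in pages:
        if not is_cached(page):
            warmed.append(page)   # crawl it to rebuild the cache
    return warmed

# After a Purge All, nothing is cached, so all 14 mapped pages should be crawled:
pages = list(range(1, 15))
print(len(expected_crawl(pages, lambda p: False)))  # 14
```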

    Thread Starter miketemby

    (@miketemby)

    the timestamp is date( ‘Y-m-d H:i:s’, time() + LITESPEED_TIME_OFFSET ), and define( ‘LITESPEED_TIME_OFFSET’, get_option( ‘gmt_offset’ ) * 60 * 60 ); it is retrieved from the WordPress timezone option

    Clearly this is not the case. I have Melbourne time zone set in WP.
    Local time currently is 8:56am. Scheduled Purge shows the current “server time” as 9:56pm i.e. 11 hours different which is equal to my timezone….
    For me to set the scheduled purge to occur at 12:01am, I have to put 1:01pm in the field…
    Last night, the Scheduled Purge field contained the value 2:23pm and this is what the Crawler Log shows:
    03/25/22 00:40:39.009 [103.42.111.114:33856 1 zLD] [Ctrl] X Cache_control TTL is limited to 2541 due to scheduled purge rule
    So at 12:40am it wanted to put a 42 minute TTL on the cached page so it expired at 1:23AM… not 2:23pm which is what it should do if it applied the logic you mentioned above.

    In fact, now that I'm looking at it, it seems like it's adding my timezone offset (+11) to the time I put in the field, because 2:23pm AEDT = 3:23am UTC, but it has set the TTL to expire at 1:23am… so it's wrong in two ways…
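    The arithmetic, spelled out (illustrative Python, just to show the numbers line up with the log above):

```python
# Hypothetical reconstruction of the apparent double-offset bug, in minutes past midnight.
entered = 14 * 60 + 23      # 2:23 pm typed into the Scheduled Purge field (local time)
offset  = 11 * 60           # gmt_offset for Melbourne (AEDT, UTC+11)

# If the field holds local time, no conversion is needed: purge at 14:23 local.
# The observed behaviour instead looks like the offset being ADDED again:
buggy = (entered + offset) % (24 * 60)
print(divmod(buggy, 60))    # (1, 23) -> 01:23, matching the observed TTL expiry
```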

    it is designed that when purge all happens, it will stop the crawler, because the crawled page is no longer cached and needs to re-cache from the start

    …wait… what? So the Crawler, which is there for the sole purpose of crawling the site so it IS cached, decides that if there is no cache, it won't cache, because there's no cache…??? Can you please explain this in more detail? Why would you not want the crawler to crawl and cache pages that ARE NOT cached? What am I missing here? That's its entire point, is it not?

    Thread Starter miketemby

    (@miketemby)

    ..above should say stopped_reset not end_reset

    Thread Starter miketemby

    (@miketemby)

    Can you please clarify what you mean here?

    The image in question does not exist anywhere else on the page. It is gathered by image id from RM settings and the meta tag is built from that – it is not gathered from elsewhere on the page so how can LS Cache be re-writing it?

    Are you saying that LS Cache is re-writing all internal image URLs even where they don't exist on the page, so any function handling an image URL will automatically get the re-written URL even if it has nothing to do with the page?

    This explanation actually makes no sense, because LS Cache is supposed to be re-writing URLs based on HTML attributes… the image has no HTML attribute before it is inserted onto the page within the meta tag, whose relevant attribute is content.

    • This reply was modified 3 years, 3 months ago by miketemby.
    Thread Starter miketemby

    (@miketemby)

    ..and here is the log of LS Cache doing the re-write. For some reason it tries to immediately rewrite the rewritten path too… odd.

    11/30/21 11:41:45.084 [ 1 2UE] [CDN] rewrite https://dev.haemochromatosis.org.au/wp-content/uploads/2021/11/fb-og-image.jpg
    11/30/21 11:41:45.084 [ 1 2UE] [CDN] -rewritten: https://cdn.dev.haemochromatosis.org.au/wp-content/uploads/2021/11/fb-og-image.jpg
    11/30/21 11:41:45.085 [ 1 2UE] [CDN] rewrite https://cdn.dev.haemochromatosis.org.au/wp-content/uploads/2021/11/fb-og-image.jpg
    11/30/21 11:41:45.085 [ 1 2UE] [CDN] -rewrite failed: host not internal
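    For what it's worth, a rewrite pass would normally skip URLs already on the CDN host rather than attempt them again and fail. A minimal sketch of such a guard (illustrative Python, not LiteSpeed's code; hostnames taken from the log above):

```python
from urllib.parse import urlparse

SITE_HOST = "dev.haemochromatosis.org.au"
CDN_HOST  = "cdn.dev.haemochromatosis.org.au"

def cdn_rewrite(url):
    """Rewrite internal URLs to the CDN host; leave everything else untouched."""
    if urlparse(url).netloc != SITE_HOST:   # already on the CDN, or external: skip
        return url
    return url.replace(f"//{SITE_HOST}/", f"//{CDN_HOST}/", 1)

original = "https://dev.haemochromatosis.org.au/wp-content/uploads/2021/11/fb-og-image.jpg"
once  = cdn_rewrite(original)   # rewritten to the CDN host
twice = cdn_rewrite(once)       # second pass is a no-op instead of "rewrite failed"
print(once == twice)            # True
```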
    • This reply was modified 3 years, 3 months ago by miketemby.
    Thread Starter miketemby

    (@miketemby)

    Never mind, I fixed it myself with some code… this really shouldn't be required!

    /**
     * Filter the Modern Events Calendar category taxonomy args to register more options.
     *
     * @param array  $args The original taxonomy args.
     * @param string $tax  Taxonomy key.
     *
     * @return array
     */
    function custom_mec_event_cat_permalink( $args, $tax ) {
        // If not a MEC Events category, bail.
        if ( 'mec_category' !== $tax ) {
            return $args;
        }
        // Add additional taxonomy args.
        $tax_args = array(
            'rewrite' => array(
                'with_front' => false,
            ),
        );
        // Merge args together.
        return array_merge_recursive( $args, $tax_args );
    }
    add_filter( 'register_taxonomy_args', 'custom_mec_event_cat_permalink', 10, 2 );

    I see the same issue was addressed on Event archive pages nearly two years ago… https://www.ads-software.com/support/topic/fixing-events-permalink/

    • This reply was modified 3 years, 3 months ago by miketemby.
    Thread Starter miketemby

    (@miketemby)

    All you need to do is install RM and add a FB default image either during the setup stage, or afterward in RankMath > Titles&Meta > Global Meta under OpenGraph Thumbnail – this will be used as the default for all pages.

    I have done some digging into the RM code and I can see that the image URL is being altered by LS Cache well before the meta tags are actually constructed and output on the page.
    I debugged the core variable holding the image meta data in the middle of the process, well before it is compiled into the html meta tag, and it already had the cdn prefix…

    Array ( 
        [id] => 1623 
        [url] => https://cdn.dev.haemochromatosis.org.au/wp-content/uploads/2021/11/fb-og-image.jpg 
        [width] => 1200 
        [height] => 630 
        [type] => image/jpeg 
        [alt] => Haemochromatosis )
    
    Thread Starter miketemby

    (@miketemby)

    ok, well these particular meta tags are added by RankMath. The image sources are local in RM but once I turn on CDN mapping in LS Cache, they get rewritten to use the CDN.

    [Screenshot: CDN Mapping Example]

    I also tried manually adding a meta tag and it did not get re-written which makes me think that either LS Cache has some edge cases included for Rank Math, or the manner in which RM is adding them causes LS Cache to re-write them anyway.

    • This reply was modified 3 years, 3 months ago by miketemby.
    Thread Starter miketemby

    (@miketemby)

    3.6.4 at the moment, on the site where it most recently occurred. It actually occurred twice this week on two different sites.
    What is UCSS?

    I will point out that this has occurred many times over the years; it's just that I only now investigated it deeply enough to determine that the combined file was not being built properly. And I will also point out that it has occurred when the plugin was up to date at the time.
    I'm pointing this out in preparation for you telling me to update to the latest version and see if the issue persists…
    If you are unfamiliar with the issue existing, there is no reason the latest release would fix it… fixes don't tend to be developed for bugs that are not known.

    • This reply was modified 3 years, 7 months ago by miketemby.
    Thread Starter miketemby

    (@miketemby)

    @qtwrk as it always occurs in production sites, I do not have debugging enabled so no errors available.

    I will attempt to reproduce the error manually in a dev environment by setting a very short TTL; however, I am unsure of the exact conditions required for the issue to present itself, so this may not be achievable.

    One observation I have made, which may simply be a coincidence due to the nature of most CSS files: the “break” I have observed has always been where comments appear in the CSS. But this may simply be because most CSS files begin with a comment block… and the break occurs at the start of a new CSS file…

    Thread Starter miketemby

    (@miketemby)

    Hi

    I have done some detailed investigation and I must apologise. It turned out that an empty line before the opening <?php tag, in one of the included PHP files within the child theme, was causing the issue.
    It's quite bizarre that the only symptom this caused was this line return before the nonce value when ESI is turned on in LS Cache, but hey…

    https://www.dropbox.com/s/3ndz32w7zf203jn/EOF%20Error.png?dl=0
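    In case it helps anyone else, files with stray output before the opening <?php tag can be found mechanically. A quick sketch (illustrative Python — the scan path is whatever your child theme directory happens to be):

```python
from pathlib import Path

def files_with_leading_output(root):
    """Return PHP files whose content does not start with '<?php' --
    any leading bytes (blank lines, spaces, a BOM) become page output
    and can corrupt things like inline nonces under ESI."""
    bad = []
    for path in Path(root).rglob("*.php"):
        if not path.read_bytes().startswith(b"<?php"):
            bad.append(str(path))
    return sorted(bad)

# e.g. files_with_leading_output("wp-content/themes/my-child-theme")
```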

    Thread Starter miketemby

    (@miketemby)

    ..right, so you take 3 weeks to respond to the post but you close it because I didn't respond in 6 days…

    You can reproduce it by turning on ESI….

    Divi outputs a variable called et_pb_custom which contains a heap of stuff, including a nonce called et_frontend_nonce. As per the second screenshot, this nonce is included by default in the list of predefined ESI nonces.
    When you turn ESI on, as per the first screenshot, the format of the variable is broken (by adding a line break before the nonce), causing JavaScript errors… “unexpected EOF”

    • This reply was modified 3 years, 8 months ago by miketemby.