FireWire

Members
  • Posts

    651
  • Joined

  • Last visited

  • Days Won

    47

Everything posted by FireWire

  1. Them: "So what about using WordPress for the website?" Me: "WordPress is great, I make a lot of money helping people who own WordPress sites use it."
  2. Thanks to @teppo for the ProcessWire Weekly shoutout! I wanted to share some more detail and information since my last update post.

Caching is optional and can be cleared. Translations are persisted for 1 month, which helps even out month-to-month API usage, but they also expire, so if DeepL improves its translations you still get the latest and greatest. I haven't formally timed it, but cached translations feel near instantaneous (a conceptual sketch of this kind of caching follows below this post).

Modified content? If the content of a field has changed since the page edit screen was loaded, the tab text is italicized and an accent is added. This tracks changes whether you typed the text or used the translator to change the content. It's stateful, so if you return the content to its original value the accent is removed to indicate that the content has not changed. This is tracked independently for each field, and separately for each language, so it is now much easier to see if there are fields that weren't translated. It even helps users make sure they didn't miss anything before they hit Save.

Table fields are now supported. I forgot to mention that before. Fluency has been tested with all ProFields and is 100% compatible. TinyMCE is ready to go and, as promised, CKEditor regular and inline are also supported.

Error handling! Fluency is aware of all errors that can come back from DeepL. It's also smart enough to know if the DeepL service is unavailable or if you lost connection to ProcessWire (what wifi?). This makes it very clear where the error happened, which may save you some headaches during development (and maybe emails from your clients/users later). In my case DeepL rejects requests when I'm on a VPN even if my internet is working fine.

Localization is here. Every single aspect of Fluency is translatable using one central file in the admin language configuration. That's it. No chasing different files in the module to translate things. English is the default language in these pics, but it also shows the proper language if your default isn't English. Achtung! Even the errors speak your language. I got you, international friends.

For the nerds... One field, one file, full documentation. Each field adheres to standardized public interface methods, so fields are modular and completely independent of the rest of the codebase. If the old code had been structured like this, TinyMCE support would have been ready the day after Ryan announced it. Not every inputfield needs its own module because some fields just use others: a repeater containing a textarea still just counts as a textarea. Fewer module updates, more translating!

All of these are core Fluency features. I'm prioritizing this over any work on a Pro module to get this out for testing and into everyone's hands sooner. Still work to be done, and already 3x more code than the last version. More updates to come!
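A quick aside on the caching mentioned above: this is not Fluency's actual implementation, just a minimal sketch of how month-long translation caching can be approached with ProcessWire's core WireCache API. The translateViaDeepL() helper and the cache key format are made up for the example.

```php
<?php namespace ProcessWire;

// Sketch: cache a translation for one month using ProcessWire's WireCache ($cache).
// translateViaDeepL() is a hypothetical stand-in for the actual DeepL API call.
function getCachedTranslation(string $text, string $targetLang): string {
  $cache = wire('cache');
  $key = "translation-$targetLang-" . md5($text);

  // Returns the cached value if present; otherwise runs the closure,
  // caches its result for one month, and returns it.
  return $cache->get($key, WireCache::expireMonthly, function() use ($text, $targetLang) {
    return translateViaDeepL($text, $targetLang); // hypothetical DeepL request
  });
}
```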
  3. So, I wasn't ripping on JS. I was expressing frustration that I think a lot of people have about the ecosystem. I don't use jQuery, but I used it as an example of bloat growth. My point really is the JS-ification of everything and how that mentality has caused a high-intensity recursion of problems stemming from forcing a language written for the browser into a server. I remember when it was first announced that someone got the V8 engine to run in a server environment, and the first thing I thought was "well, that sounds like a box of headaches". The main point was to throw in some extra thoughts about the point that the first video makes: traditional server side programming languages and their frameworks (like Laravel) are not only very capable, they're enjoyable to use. They're also incredibly stable, and PHP overall is so much easier to work with and upgrade over time.

That's not a JavaScript problem, it's not a PHP problem, that's an entire internet problem ha!

Angular is a front end framework. It's also a big example of a framework with a rough history of development. It used to introduce a ton of breaking changes regularly and went through a complete rewrite. It wasn't a great developer experience. When I was learning it years ago, I watched a presentation by Google where they made a list of all of the breaking changes that would be happening in several months without a major version release. This was after they split it off from AngularJS. I walked away from it.

This is a great example of great JavaScript and what a focused UI library can be. I was mainly using React as a punching bag because the video above it was talking about server side components for React.

I'm not anti-JS! I mean, yes, but there are a small number of people that have to worry about speed at a scale where it would matter. Tons of applications and websites of all sizes run on "slow" languages and environments like PHP and Ruby, but in the real world the end user isn't going to notice unless something is really, really wrong with the code or the server. Where NodeJS is really useful is handling socket connections; there it's really the only choice. I've played around with a tiny SocketIO server I built and it's pretty great. That said, I'm not changing my entire server environment just to implement that. I'm building an application that revolves around timers and control messages between clients, and there are services that do this better than managing your own code/server, especially when geolocation is concerned. At that point network latency is a bigger issue. PHP 8's JIT compiling will continue to push speed. Is it going to match Node? Nope. Am I worried about it? Nah. Should anyone reading this right now? I'd love to see what you're working on if you are...

That was hilarious, but the story behind it also says a lot about the state of package management in JS. NPM was a massive jerk.

Yes! I'm using InertiaJS with Laravel and Vue. The reason I chose Vue was pretty much just because it's very commonly paired with Laravel, so it's good skill experience. Inertia is really great and simplifies a ton of stuff out of the box. Like I said, having a great server side application and a solid JS front-end is where it's at.
  4. I'll go ahead and invite myself to share some JS thoughts nobody asked for...

The don't-reinvent-the-wheel message in the "Do not use React Server Components" video is good, but I think there's a lot more to it when it comes to full stack JS overall. Not only are JS devs largely solving problems that have already been solved, the JS ecosystem is constantly solving problems that it causes for itself. Obviously new features can cause issues to address, but it feels like JS really takes the cake on this one. Hydration and rendering issues were pretty much inevitable when you start overloading the front end and forcing the browser to do so much heavy lifting. Wait... sending a blank page that waits for JavaScript murdered your SEO?! Rendering pages on the server is relatively simple; rendering them in the browser- boy howdy... So then the JS devs cheered when they invented server side rendering... for JavaScript.

Then there's the bundle sizes, which is noted in this article about React Server Components: So now they're solving for asset sizes, another layer of complexity to solve an issue that didn't exist (at the level it does now) until massive front end frameworks caused it. We used to worry (showing my age here) about jQuery library sizes... React is excited about saving 240k while the entire jQuery library is 85.4k (uncompressed), and all you have to do is redesign your app architecture with server side JS! BRILLIANT! Before someone interjects to correct me: I'm not saying that React and jQuery are the same, or comparable in functionality and purpose. I am saying that the JavaScript-first approach led a lot of people to accept compromises to create the Next Hot Thing™, and a lot of problems with it. JavaScript on the server pretty much just made this more complex by providing increasingly complex ways to make itself more complex- lol, recursion.

"But", the full stack JS dev will say, "you can use the same language everywhere so there's consistency." My brother in Christ, you don't even import code between files the same way in Node and client side ES6. So when I read this in that video: This is from a developer who goes to bed at night thinking about how their Next.js application codebase might be outdated by the time it launches. Then I go back to feeling like this:

I used to kind of stress about not pursuing full stack JS, then I realized that I'm happy. Having a front end (whether SSR or a JS framework) and a back end (CMF or non-JS app framework) keeps me sane. I know where things go, what they do, and why. There's consistency between projects. It's easier to maintain best practices. My NPM headaches are smaller. My PHP doesn't need nodemon or pm2 to restart the server application if there's a JS error. The amount of documentation I read that has a big red box in the middle saying "Note: this thing that you've built several apps with and know well is going away. We're destroying it because someone on Github said we need to move to stateless functions. Soon there will be pain, and you will cry." is very limited. I made that one up, but you get the idea. I'm not dragging front-end JS frameworks per se, but making the case for why they should not pave the way to the JS-ification of all that is holy, and why they aren't automatically better. Maybe putting JS everywhere has made SPA solutions worse because they didn't leverage how well traditional server side languages already worked.

Went off on a tangent here, but I've had a few moments of zen this year about being a PHP developer and wanted to share the energy. All that aside, to answer your question @wbmnfktr: I've wanted to pair a front end framework with PW but haven't had the time or the right project to give it a try. I do like the idea of using PW as a JSON delivery backend in general.
  5. I think that you make a good point about the frontend/backend crossover. Writing the same language everywhere makes it comfortable to mix up what does what and where. Then there's the perception of, or need for, performance: "how can we reduce API calls to make our application 'snappier'?" There's an entire generation of developers that have only ever known full stack JS, massive front ends, and a browser-focused approach. Like the video above said, "you're solving problems that were already solved". I think there's a great place for sane client side UI frameworks and server side frameworks. It does create a great opportunity for user experiences.
  6. I'll share a bit of a sidenote on the language overall that I think may start to appeal to developers again who are giving PHP a try, with or without a framework. PHP 8.0-8.2 has changed the way I write code significantly, and features new and old like arrow functions, enums, union typed properties, nullsafe operators, named arguments, return type hinting, and constructor property promotion are fantastic. Performance gains keep rolling in as well. I've enjoyed seeing the deprecations and cleanup in the language- even just committing to using typed properties and return type hinting makes so much of a difference in how the language feels. I like the first video above where the person focuses on how enjoyable it can be, and I'm always hesitant to give much weight to a JS developer who has some really outdated opinions. For example, these two sets of code do the exact same thing (a representative sketch follows below): I think that says a lot, and the new features and style of the language are going to continue to boost frameworks as they adopt them as norms. Developers returning to PHP or trying it for the first time are going to see something that feels really new. I've been building a Laravel/Vue application and InertiaJS feels like magic the way it marries the SPA front end with the back end. Really enjoyable.
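The comparison screenshot isn't reproduced here, so as a stand-in, here is an illustrative before/after of the kind of difference being described. The Invoice classes and data are made up for the example, showing constructor property promotion, typed properties, and arrow functions.

```php
<?php

// Pre-PHP 8 style: untyped properties, verbose constructor, anonymous callback
class Invoice
{
    public $amount;
    public $currency;

    public function __construct($amount, $currency)
    {
        $this->amount = $amount;
        $this->currency = $currency;
    }
}

$invoices = [new Invoice(25.00, 'USD'), new Invoice(40.00, 'EUR')];
$totals = array_map(function ($invoice) {
    return $invoice->amount;
}, $invoices);

// PHP 8 style: constructor property promotion with typed properties, arrow function
class ModernInvoice
{
    public function __construct(
        public float $amount,
        public string $currency = 'USD'
    ) {}
}

$modernInvoices = [new ModernInvoice(25.00), new ModernInvoice(40.00, 'EUR')];
$modernTotals = array_map(fn($invoice) => $invoice->amount, $modernInvoices);
```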
  7. The planning is 100% with an eye on the future. Thank you for the kind words and support!
  8. I'm glad you asked this! This should be pretty much a drop-in replacement. The only things you'd have to do to upgrade to the new version would be setting the language associations again and transferring any settings (like globally excluded strings and the API key) in the module config. Upgrading later will not cause any loss of website content and the work for developers should be minimal. The new features are implemented behind the scenes, so you just get more cool stuff. ProcessWire makes it possible to change editors between TinyMCE and CKEditor without re-creating the field, so using CKEditor now and then switching to TinyMCE later shouldn't cause any headaches. It really should be as easy a transition as you can get with how well TinyMCE is concurrently being implemented (props to @ryan). As for the module, Fluency doesn't care what field/editor you use or if it's changed at any time, because fieldtypes are detected and initialized by JS at runtime when the UI loads, not on the back end. So good news- I don't believe that there is anything that keeps you from using Fluency now and upgrading later. I've been working regularly on the module lately; my goal is this month or next. I think a wildcard will be testing, but I'll be requesting help from the community here when it's time. I want this version to be the one that breaks out of alpha. Many thanks and I'll report back here when I have updates.
  9. Hello all! I've compiled a list of features that Fluency will have in its next release, which is currently under development. There are a lot of new features and, as mentioned, it's being completely rewritten. I want to share some notable features that I think will be some great additions. IMHO, I believe that this puts Fluency in a first-rate position to make ProcessWire easier and more powerful for multi-language sites compared to other CMS/CMF platforms. If you've tried to use translation modules/plug-ins for other platforms I think you'll agree.

As you can imagine, the feature expansion makes the module very powerful but it also means a lot of extra work. There will be an upgraded version available named (unsurprisingly) FluencyPro. Before I get into that, my commitment is to make the non-pro version the real deal when it comes to quality and features. No features will be taken away from the core Fluency module and moved to the premium module. As you'll see below, the majority of new features will be available in the Fluency core version. Every feature in FluencyPro is new and complements the Fluency module that is and will always be free. I also want to make the Pro version very reasonably priced so it can remain within reach for as many people as possible and an easy sell to clients. The price will help offset the time and effort it takes, and the support is greatly appreciated. More details to follow on that later.

I chose the new features based on feedback from the community as well as my own usage on sites I've built. Here is a list of new features that are coming in the next version:

Fluency (core)
- Continuity - All existing features will still be available
- Localization - All Fluency admin UI elements can be translated to all languages present in ProcessWire. Includes the translation trigger buttons on each field, the global translation tool in the menu, etc.
- Error Handling - Will indicate things like the DeepL service not being available or the usage limit being reached. All errors will be translatable.
- Improved Module Config - Cleaner and more organized module config screen
- Inline Fields - Support for inline fields
- TinyMCE - Support for the new rich text editor
- CKEditor - Continued support for CKEditor to keep the module backwards compatible with ProcessWire sites that don't/can't use TinyMCE
- Per-language change indication - When content is changed in a field, the corresponding language tab will indicate that the content has been changed. This makes it easy to see where content should be updated in other languages.
- Translation Caching - Translations will be cached so that additional requests for the same content will not require additional DeepL API calls. This can help keep monthly DeepL API usage lower where possible and make repeated translations lightning fast. Cache can be manually cleared in the module config screen.
- Logging - Improved logging
- Remains free - Free forever and will be open-sourced

FluencyPro
- Multi-Language - Translate any content to any language for a field.
- Multi-Language - Translate any content from any language for a field.
- One Click Translate All - Any language to every other language for a field. This makes using a wider array of languages trivial when editing pages.
- Markup Companion Module - Optional FluencyMarkup module provides individual easy-to-use methods for markup output to the front-end. This makes Fluency a complete translation solution out of the box. Features:
  - Render all languages as prebuilt `<a>` links to navigate between page languages.
  - Render a self-contained prebuilt `<select>` element with one-click switching between languages on the front end, with no additional JS required in your code. Inline JS is optional for those who prefer to implement their own.
  - Render all alternate language `<link rel="alternate" hreflang="{ISO code}" href="{alt language URL}">` `<head>` tags to indicate that the page has additional versions in other languages. Great for SEO and adhering to HTML standards. (A hand-rolled core-API equivalent is sketched after this post.)
  - Output the current language ISO code. Useful for turning `<html>` into `<html lang={ISO code}>` in keeping with HTML best practices.
  - All render methods are hookable for additional customization by developers.

Notable overall code improvements:
- Written using PHP 8.0 (required for use)
- JavaScript rewritten in ES6 to make use of newer language features, transpiled to ES5 to maintain browser compatibility. Transpiling is pre-configured and included with the module.
- Modular per-inputfield JavaScript so that adding new field types and updating existing ones in the future is faster and easier. Standardized composition so interfacing with fields within code is uniform and predictable.
- Server-side module code is being rewritten with the future potential to add additional "translation engines". While there won't be additional services available now, there will be a more standardized interface to make adding others easier. Both versions of Fluency will have this feature and can benefit from new translation services other than DeepL. This won't be 100% ready on release, but it is being kept in mind now. Future translation engines will have access to a uniform interface to use the Fluency caching feature with no additional overhead during development.
- Improved RESTful-style API within the admin which lets other modules or PW customizations implement separate translation features using AJAX requests.
- Leverages ProcessWire's built-in JavaScript config object (as opposed to a separate API request on admin page load) to speed up UI initialization.
- .prettierrc and .editorconfig files to make contributing easier.

Thanks everyone for your patience, really excited for the new features! Cheers!
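As a rough illustration of the kind of markup the companion module aims to generate, here is how alternate-language `<link>` tags can be rendered by hand with ProcessWire's core multi-language API today. This is not FluencyMarkup; the ISO code mapping below is a made-up placeholder (Fluency maps languages to DeepL codes), and it assumes a template file on a site with LanguageSupportPageNames installed.

```php
<?php namespace ProcessWire;

// Sketch: render <link rel="alternate" hreflang="..."> tags for each language a page
// is available in, using only core ProcessWire multi-language API in a template file.
foreach($languages as $language) {
  // Skip languages this page isn't active/viewable in
  if(!$page->viewable($language)) continue;

  // Placeholder ISO code: reuse the language name, treating 'default' as English
  $code = $language->name === 'default' ? 'en' : $language->name;

  // Full URL to this page in the given language (requires LanguageSupportPageNames)
  $url = $page->localHttpUrl($language);

  echo "<link rel=\"alternate\" hreflang=\"$code\" href=\"$url\">\n";
}
```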
  10. Ryan has already responded to the Github issue and will merge the fix as soon as it can be confirmed. Passing along a PW API method call he shared that can fix this, in case anyone runs into it and wants to skip working in the DB directly:

$query->exec('RENAME TABLE tmp_field_body TO field_wvprofile_body');

Thanks @ryan!
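If it helps, here is a minimal sketch of running that same statement through ProcessWire's $database API variable (for example from the Tracy console or a throwaway template snippet). The table names match the specific case in this thread, so adjust them for your own field.

```php
<?php namespace ProcessWire;

// Sketch: execute the rename via WireDatabasePDO ($database).
// Table names match the specific case in this thread; adjust for your field.
$database = wire('database');
$database->exec('RENAME TABLE tmp_field_body TO field_wvprofile_body');
```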
  11. Confirmed renaming the tmp field table to the original table name solved the issue. Many thanks for your work!
  12. Fantastic work! That table does exist in my DB. As for DB table names being lowercase by default, that seems to conflict with the admin UI. If the field lets you enter uppercase names but table names default to lowercase, I think this issue is pretty much guaranteed to happen. Because the UI has shown that uppercase letters are accepted, and it's been that way for a long time, I think the solution would be for $config->dbLowercaseTables to default to false. That would preserve the behavior shown in the admin and fix the issue going forward. But I, too, have started to dig deeper. I did a quick search because I was curious about case-handling in MySQL itself and whether there might be additional scenarios to consider. Database case sensitivity is dependent on the filesystem of the underlying OS. Windows is not case sensitive, Unix-like systems such as Linux are case sensitive, and macOS is almost always the exception to that rule: it's a Unix-like system, but HFS+ is not case sensitive, and APFS can optionally be (only at the time of formatting the disk) but is not by default. Even Docker on Mac looks like it struggles with this. So I'm wondering if this is an environment issue and whether $config->dbLowercaseTables defaults to true to mitigate potential issues. Regardless, the admin UI/name field validation should reflect the current DB casing configuration. The better option would just be to eliminate uppercase letters in field names going forward, since that would act as a sort of protection for all environments.
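For reference, the setting being discussed lives in site/config.php. A minimal sketch of overriding the default follows; whether doing so is a good idea depends on the OS/filesystem caveats above.

```php
<?php namespace ProcessWire;

// site/config.php (excerpt)
// ProcessWire lowercases field table names by default ($config->dbLowercaseTables = true).
// Setting it to false keeps table names in the casing entered in the admin,
// which matters on case-sensitive filesystems as discussed above.
$config->dbLowercaseTables = false;
```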
  13. There was nothing special in the setup; it's very similar to what I usually use on sites. Really appreciate you looking into it and running some tests! This is it. It was the uppercase B in Body that I accidentally put into Name instead of Label that conflicted with another field named body. Really appreciate @Robin S, @wbmnfktr, and @iank working to figure out what happened; I don't have the time to troubleshoot right now (deadlines!) and you really helped out. If I hadn't found out what happened I would be kind of uneasy about the website. Ideally I think it would be best if field names were restricted to lowercase, but that would be a big breaking change at this point. Many situations could cause this:
- Text intended for the Label could mistakenly be put into Name and all data is lost for that field (my case)
- A developer may be familiar with this bug, but not know or have forgotten that a field already exists with a name that will conflict
- Accidentally mistyping with a capital letter where a lowercase letter would have shown the appropriate duplicate field name error
I have opened a Github issue for this bug.
  14. This was on a production server so debug is off, no warning. I am familiar with that data loss warning and didn't see it. I've only seen it when attempting to delete a field or change its type. This is v3.0.165, so stable but not the latest. I was able to restore the data by pulling the table from an SQL backup and running a query to create the table and insert all of the data. It was the main text field on ~120 pages, so there was potential for a lot of catastrophic data loss. ProCache kept the content live while I restored this, but it was a real not-cool moment... ProCache and Cronjob Database Backup can go a long way toward saving a site with no downtime. Just still really not sure if this is something everyone needs to keep an eye out for or if it was a one-time glitch.
  15. Like the title says. I accidentally renamed "wvprofile_body" to "Body" when I actually meant to change the label. I didn't notice that I had entered it into the wrong field and hit Save. ProcessWire destroyed the wvprofile_body table, then showed me an error saying that the field "body" already exists. I have no idea what happened. I've never had this happen before, but I've also never accidentally renamed a field to an existing field name. Has this happened to anyone else? Doesn't PW check for a unique name before deleting data? I looked in the database directly and the entire wvprofile_body table is gone. Keep backups.
  16. I've been following the news on the new editor and I'll be adding that to the next version of the module. The module is already getting refactored so I'll be able to plan for this much more easily. I'll work on posting some updates here with some details on what's planned!
  17. @Robin S Brilliant! I don't know why I didn't think of that... Thanks!
  18. Hello all, I am working on a module and wanted to know if there is a way to get deeper nesting in module admin URLs.

Basic: /admin/coolmodule - renders with ___execute() in the module
First child: /admin/coolmodule/info - renders with ___executeInfo() in the module

Would like to create these:
/admin/coolmodule/foo/bar
/admin/coolmodule/foo/bar/baz

Is there a way to do this virtually, like ___execute() and ___executeInfo(), or something similar? Can URL hooks be used within the custom module page? Thanks!
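Not the answer from the thread, but one approach worth noting: in a Process module the first URL segment maps to an ___execute*() method, and deeper segments can be read from $input inside that method (URL segments are allowed on the admin template). The class, method, and segment names below are made up for the example, and module info/config is omitted.

```php
<?php namespace ProcessWire;

// Sketch: handling /admin/coolmodule/foo/bar and /admin/coolmodule/foo/bar/baz
// inside a Process module. "foo" routes to ___executeFoo(); deeper parts of the
// URL are read as URL segments.
class ProcessCoolModule extends Process {

  public function ___executeFoo() {
    $bar = $this->input->urlSegment2; // "bar" or '' if not present
    $baz = $this->input->urlSegment3; // "baz" or '' if not present

    if(!$bar) return '<p>No sub-item requested.</p>';

    $bar = $this->sanitizer->entities($bar);
    $baz = $this->sanitizer->entities($baz);

    return "<p>Viewing: foo/$bar" . ($baz ? "/$baz" : '') . "</p>";
  }
}
```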
  19. The logs show that this error has been happening for this version and the last version. I checked that the version of PW was compatible before upgrading the PHP version to 8 on the server. I inspected the network requests and it looks like this is a ProDrafts issue. It's attempting to save a field's changes. I was able to trigger this by creating a new page, where it immediately detects a change to that field and shows the alert error. I attempted to fix this issue by adding a RegEx in the module's config, but no dice. Using ProDrafts version 0.8.0. Tough to figure out what could be causing the conflict since it looks like it's isolated to ProDrafts, but also tough because it doesn't seem to be happening more widely to other ProcessWire devs.
  20. We are seeing consistent errors for a few fields. They pop up as browser alerts when editing pages and I'm not sure what's causing them. This error is thrown from a PW core file, so I'm not sure where to start. Seeing errors such as:
"Field '_pw_page_name' is not applicable to this page"
"Field 'status' is not applicable to this page"
"Field 'wrap_delete_page' is not applicable to this page"
This is happening for all users regardless of permissions and when editing any page. ProcessWire 3.0.2, PHP 8.0.16. From the looks of it I'm only seeing this happen with core fields, not fields I've created.
  21. The files you work with will always be on your local machine. And I see no foolishness here :) Working inside the container itself (using the shell script in the Devilbox directory) shouldn't be needed, so no worries there. Some recommendations (and me thinking out loud), with a config sketch after the list:
- If you're getting a 500 error with a page title before failure, then PHP is booting properly. I would start with ProcessWire's logs in case it was able to catch that error when it executed. There could be more info there.
- Double check that $config->debug is set to true in config.php to see if you can squeeze any more information out of your 500.
- Check your .env config to make sure you're running either PHP 7.4 or 8.0 (depending on your PW version); I think Devilbox defaults to PHP 8.1 IIRC, which PW isn't fully compatible with yet AFAIK.
Knock out those items and we can troubleshoot from there if it doesn't fix the problem.
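For reference, a minimal sketch of the debug toggle mentioned in the list above (site/config.php); remember to turn it back off outside of local development.

```php
<?php namespace ProcessWire;

// site/config.php (excerpt)
// Enables detailed error output; use only in local/dev environments.
$config->debug = true;
```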
  22. Nice! Very awesome seeing the concept worked on and a great contribution to the ecosystem. Working with time bends my brain, you have my respect and admiration haha. Looking forward to seeing your work. Haven't used Alpine yet personally but it looks fantastic.
  23. A couple months ago I reached out to the original author of Recurme and asked about the possibility of open sourcing the module and taking a role as maintainer. I've used this module previously and think a good calendar module is critical for the PW ecosystem. Had a great exchange with him and he was open and willing to turn over any assets needed and allow the code to be open sourced with no restrictions. As far as I know the module is still largely usable with the tweaks people have mentioned in this thread but I can't speak to more than that. I started working on refactoring the code and have a few ideas on how to improve/revamp it. I'm still enthusiastic about the idea of keeping development alive but unfortunately I just don't have the time right now to take it on with how busy I am with non-programming stuff as well as my commitment to getting the next version of my Fluency module released. I'd love to see this open sourced and would count myself as a contributor or help out where I can, but being a maintainer/lead is outside of my abilities right now. Hope there is some interest or capacity out there to make this happen.
  24. What the hell are "outdoors" and "offline time"? I've never heard of this.
  25. Back with more! Prepare for an incoming wall of text...

I mentioned adding custom directives to our .htaccess file and wanted to share some more detail on that as well as some other tips. I was reviewing our 404s as a matter of maintenance, so to speak, to ensure that we had redirects in place as necessary. While reviewing that I found a lot (a lot) of hits that were bogus: clearly bots, and even web crawlers for engines we have no interest in being listed on. What I found was that in just 48 hours we had 700 total 404s, and I imagine on some websites that number could be higher. By analyzing that log and writing custom directives I was able to take 700 404s logged by ProcessWire down to 200 that are "legitimate", in that it's traffic that needs to be redirected to a proper destination page.

I'm sharing my additional directives here as an example. Again, ANY bot/security directives should be at the very top of your .htaccess file. As always, test test test, and modify for your use case.

# Declare this at the top of your .htaccess file and remove or comment out all other instances of this directive elsewhere
RewriteEngine On

# Block known bad URLs
# Directories including sub-directories
RedirectMatch 404 "\/(wp-includes|wp-admin|wp-content|wordpress|wp|xxxss|cms|ALFA_DATA|functionRouter|rss|feed|feeds|TKVNP|QXXLZ|data\/admin)"

# Top level directories only - There are no assets served from these directories in root, only from /site/assets & /site/templates
RedirectMatch 404 "^/(js|scripts|css|styles|img|images|e|video|media|shwtv|assets|files|123|tvshowbiz)\/"

# Explicit file matching
RedirectMatch 404 "(1index|s_e|s_ne|media-admin|xmlrpc|trafficbot|FileZilla|app-ads|beence|defau1t|legion|system_log|olux|doc)\.(php|xml|life|txt)$"

# Additional filetypes & extensions
RedirectMatch 404 "(\.bak|inc\.)"

# Additional User Agent blocking not present in 7G Firewall
<IfModule mod_rewrite.c>
  # Chinese crawlers that cause significant traffic to bad URLs
  RewriteCond %{HTTP_USER_AGENT} Mb2345Browser|LieBaoFast|zh-CN|MicroMessenger|zh_CN|Kinza|Datanyze|serpstatbot|spaziodati|OPPO\sA33|AspiegelBot|aspiegel|PetalBot [NC]
  RewriteRule .* - [F,L]
</IfModule>

Details on this additional config:
- It blocks some WP requests that get past 7G.
- My added directives return a 404, which tells the bot that the URL flat out doesn't exist, rather than a 403 Forbidden, which could indicate that it may exist. I read somewhere that a 404 is more likely to get cached as a URL not to be revisited (wish I could remember the source; it's not a major issue).
- Blocks a lot of very specific URLs/files we were seeing.
- Blocks Chinese search engine bots, because we don't operate in China. These amounted to a lot of traffic.
- Blocks common dev files like .bak and .inc.* which aren't protected by default. Obviously you want to eliminate .bak files altogether in production, but this is an added safety fallback.
- I have not seen this cause any issues in the Admin. Also consider whether directives could cause problems in another language.
- Customize by reviewing your logs.

Additional measures

7G and the directives I created are a healthy amount of prevention against malicious traffic. Another resource I use is a Bad Bot gist that blocks numerous crawlers that add traffic to your site but may or may not generate 400-500 HTTP statuses. This expands on 7G's basic list.
Bad Bot recommendations:
- Comment out: SetEnvIfNoCase User-Agent "^AdsBot-Google.*" bad_bot
  There's not really a good reason to block a specific Google bot.
- If you make Curl requests to your server then comment this line out: SetEnvIfNoCase User-Agent "^Curl.*" bad_bot
  Reason: this will block all Curl requests to your server, including those made by your own code. Be sure that you don't need Curl available if leaving this active. This is included in the list to prevent some types of website scrapers. If you want to leave this active and still need to use Curl, then consider changing your User Agent.
- Comment out: SetEnvIfNoCase User-Agent "^Mediapartners-Google.*" bad_bot
  Again, not necessary to block Google's bots, and it might even be a bad idea for SEO or exposure (only they know, right?).

Testing

There's no such thing as too much testing. These directives are powerful and, while written well, may have edge cases (like 'null' mentioned previously). There's no replacement for manual testing; specifically, it would be a good idea to test any marketing UTMs or URLs with GET strings you may have out there, just in case.

For automated testing I use broken-link-checker, which can be called from the terminal or as a JS module. I prefer this method to using some random site scanning service. It will detect both 404s and 403s by scanning every link on your page and getting a response, which is useful for ensuring that your existing URLs have not been affected by your .htaccess directives.

broken-link-checker recommendations:
- Consider rate limiting your requests using the --requests flag to set the number of concurrent requests. If you don't, you could run into rate limits that your managed hosting company, CDN, or you (if you're like me) have built into your own server. This terminal app runs fast, so if you have a lot of links or pages those requests can stack up quickly.
- Consider using the -e flag, at least initially while testing your directives. This excludes external URLs, which will help your test complete faster and prevent any false positives if you have broken external links (which you can handle separately).
- Consider using the -g flag, which switches the request method to GET, which is what browsers do.
- Shortcut, just copy and paste my command: blc https://www.yoursite.com -roegv --requests 5

If you have access to your Apache access log via a bash/terminal instance, then you may consider watching that file for new 404/403 entries for a little bit. You can do this by navigating to the directory with your access log and executing the following command (switch out the name of your log as needed):

tail apache.access.log -f | grep "404 "

You may also consider checking for 403s by changing out the HTTP status in that command.

"This seems excessive"

I think this is good for every site and, once you get it dialed in to your needs, it can be replicated for others. There's no downside to increasing the security and performance of your hosting server. Consider that any undesirable traffic you block frees up resources for good traffic, and of course reduces your attack surface. If you need to think about scalability then this becomes even more important. The company I work for is looking to expand into 2 additional regions and I'd prefer my server was ready for it! If you get into high traffic circumstances, then blocking this traffic may prevent you from needing to "throw money at the problem" by upgrading server specs when your server is running slower.
Outside of that, it's just cool knowing that you have a deeper understanding of how this works and that you've expanded your developer expertise further. This isn't meant to be an exhaustive guide, but I hope I've helped some people pick up some extra knowledge and saved everyone a few hours on Google looking this up. If I've missed anything or presented inaccurate/incomplete information, please let me know and I will update this comment to make it better.