Everything posted by Mikel

  1. Sorry, @matjazp, that's a genuine data-loss bug on Windows, on us. Root cause: buildLocalFileMap was using SplFileInfo::getPathname(), which returns backslash paths on Windows, while the remote tree uses forward slashes. So every subdirectory file looked simultaneously "new" (not in local) AND "deleted" (not in remote) because the key comparison failed. The ZIP path then wrote 980 files and the delete pass immediately removed the same 980 files via their backslash variant — which on NTFS resolves to the same file. That's exactly the "6 files left in root" pattern you saw. Fix is now in branch refactor-install-from-Github: relative paths are normalized to forward slashes in buildLocalFileMap regardless of OS. A few things to double-check before you retry:
     • Sync the GitSync branch refactor-install-from-Github to the latest commit 0f1425e
     • Make sure your manually-restored TracyDebugger matches the upstream master (which it should if you grabbed it from the GH repo directly)
     • Run sync — the log should show something like 0 to update, 0 to delete (134 preserved via .gitignore/.gitattributes) and "up to date"
     If your restore was from git clone rather than the GitHub ZIP, your local will have .gitattributes and docs/ present — those are now matched by the .gitattributes export-ignore filter, so they'll be preserved rather than deleted. Thanks for the precise log output — without the file-count math the Windows-only nature wouldn't have been obvious. Linux/macOS users were unaffected because SplFileInfo::getPathname() already returns forward slashes there. Cheers, Mike
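The shape of the fix can be sketched like this — a simplified, hypothetical version of the local file map build (the function name and details are illustrative, not GitSync's actual code):

```php
<?php
// Build a map of relative path => absolute path, normalizing Windows
// backslashes so the keys compare equal to the forward-slash paths
// that the remote git tree uses.
function buildLocalFileMapSketch(string $baseDir): array {
    $map = [];
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($baseDir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($it as $file) {
        // Relative to the module dir; getPathname() uses "\" on Windows
        $relative = substr($file->getPathname(), strlen($baseDir) + 1);
        // The crucial normalization: always compare with forward slashes
        $relative = str_replace('\\', '/', $relative);
        $map[$relative] = $file->getPathname();
    }
    return $map;
}
```

On Linux/macOS the str_replace is a no-op, which is why those platforms were never affected.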
  2. Hi, @maximus, since we do not use OpenAI products (company policy) I haven't tried Codex at all. But nevertheless, that workflow works — GitSync reacts to any git push, regardless of whether the commit comes from a human, CI, or an agent like Codex Cloud. @matjazp: Thanks for the detailed log — that pinpointed the problem clearly. Three fixes are now in main:
     • PHP execution timeout lifted during sync (set_time_limit(0)). The 500 you hit was almost certainly PHP killing the script at 30–60 s, not GitHub timing out.
     • Large diffs now use the ZIP archive endpoint. When more than 50 files need updating, GitSync fetches a single ZIP of the branch via /zipball/{ref} and copies from the extracted archive instead of hitting /git/blobs/{sha} once per file. Your 1115-file diff turns from ~1115 sequential HTTPS requests into 1 download plus local copies.
     • GitSync now respects two sources at once: .gitignore (which spares Tracy's runtime files like logs/, dumps/, bluescreen/) and .gitattributes export-ignore (which keeps maintainer-excluded files like docs/ and .gitattributes itself off the live install). Both apply bidirectionally — files matching either are never added by the sync and never deleted from local.
     End result for TracyDebugger: install + first sync = "up to date" with zero unnecessary churn. Please switch to the GitSync branch refactor-install-from-Github and give it another try — the sync log line should now show N to update, M to delete (P preserved via .gitignore/.gitattributes), with both N and M considerably lower than your original 1115/4430, and you'll also see Using ZIP archive for N file update(s) whenever the update count is above the 50-file threshold. We will merge the branch into main after some internal testing, so feedback is welcome 😉 Cheers, Mike
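The strategy switch could look roughly like this — a hypothetical sketch, not GitSync's actual implementation (ZIP_THRESHOLD and the function name are illustrative):

```php
<?php
// Decide between per-file blob downloads and one ZIP archive download,
// based on how many files the diff says need updating (sketch only).
const ZIP_THRESHOLD = 50;

function chooseUpdateStrategy(int $filesToUpdate): string {
    // Above the threshold, one /zipball/{ref} download plus local copies
    // beats N sequential /git/blobs/{sha} requests.
    return $filesToUpdate > ZIP_THRESHOLD ? 'zipball' : 'blobs';
}
```

With the 1115-file diff from the log, this picks the single-download path.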
  3. Hi, @matjazp, thanks for the report! We fixed it by removing the 3 calls — just upgrade the module to 0.2.1. As for how GitSync relates to ProcessWireUpgrade: they solve different problems. ProcessWireUpgrade is for stable releases, GitSync is for branch-based development workflows.
     ProcessWireUpgrade pulls from the official modules.processwire.com directory and compares semantic version numbers. It can upgrade the ProcessWire core itself (master or dev branch) and existing installed modules, but it cannot install new modules that aren't already present, doesn't support private repositories, doesn't support arbitrary branches per module, and uses a pull model (no webhook / no auto-sync on push). Each upgrade is a full download.
     GitSync pulls from any GitHub repository — public or private (using a fine-grained Personal Access Token for the latter). It works at the branch and commit level rather than the release level, lets you switch any linked module to any branch, and detects changes by comparing git blob SHAs file-by-file, so only modified files are downloaded. It can install brand-new modules from a GitHub URL (even ones not listed in the official directory), supports private repos, and offers GitHub webhook integration for automatic sync on every push. It does not upgrade the ProcessWire core.
     When to use ProcessWireUpgrade:
     • Production servers that should only move to officially released versions
     • Upgrading the ProcessWire core itself
     • Mostly relying on modules from the official directory
     When to use GitSync:
     • Test/staging servers that should track a development branch (e.g. develop, feature-x) live
     • Deploying your own modules from private repositories without FTP
     • Installing GitHub-hosted modules that aren't (yet) in the official directory
     • Auto-deploy on every git push via webhook
     They can be combined: use ProcessWireUpgrade for the core, and GitSync for modules that you develop yourself or that aren't in the PW directory.
     To narrow this down, could you share:
     • Which action failed? Install from GitHub, Link Module, or Sync/Upgrade of an already-linked TracyDebugger?
     • The error or behavior you saw — blank page, timeout, rate-limit message, partial sync, etc.
     • The last lines of the gitsync log under Setup > Logs > gitsync.
     • Whether you have a GitHub Personal Access Token configured (without one you're capped at 60 API requests/hour).
     The file count alone (~1,250) shouldn't be a problem for a normal upgrade — only changed files are downloaded. But it would be a problem for a fresh Install from GitHub, where every file is fetched in its own API call. The log will tell us which case you hit. We just tested and ran into zero problems: we installed Tracy via the Modules page, added it to GitSync via the dropdown, and synced the master branch. One known gotcha worth checking: TracyDebugger writes runtime files (logs, bluescreens, dumps) into its own module directory. GitSync deletes local files that don't exist in the remote repo, so a large toDelete list with permission-protected files could also cause the upgrade to fail mid-way. Cheers, Mike
  4. Update – Auto-Sync for third party modules – v0.2.0
     Hi, folks, a new feature for the GitSync module: for public third-party modules where you can't add a webhook on the source repo, GitSync can now check for upstream updates automatically.
     How it works
     Every linked module has a new per-row "Auto-Sync" setting in the GitSync overview:
     • off – manual sync only (default)
     • notify – on the first admin page load per session, GitSync queries GitHub for the tracked branch. If a newer commit exists, a warning notice appears with a direct link to the branches view.
     • auto-sync – same check, but performs the sync immediately without confirmation.
     Once a remote update has been detected, an orange "update available" badge stays on the GitSync overview next to the affected module – so the info is still there after dismissing the notice.
     Throttling and scope
     • Checks run once per session (right after admin login), not on every page load.
     • Webhook-active mappings are skipped entirely.
     • GitSync itself is excluded from auto-sync – self-updates remain manual.
     Why we built it
     Webhooks are the cleanest path for repos you own. For public modules from other authors you don't control, you'd previously have to remember to check for updates manually. Now the module nudges you on login (or syncs straight away if you trust the upstream). Feedback welcome, Cheers, Mike
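The throttling rules above can be condensed into a single guard — a sketch under assumed names (the function, the session key, and the parameters are illustrative, not GitSync's actual API):

```php
<?php
// Decide whether the upstream check should run for a given linked module.
// Encodes the rules: off = never, webhook-wired = skip, GitSync itself =
// skip, and at most one check per session (sketch only).
function shouldAutoSyncCheck(array &$session, string $mode, bool $hasWebhook, string $moduleName): bool {
    if ($mode === 'off') return false;
    if ($hasWebhook) return false;               // webhook already keeps it fresh
    if ($moduleName === 'GitSync') return false; // self-updates remain manual
    if (!empty($session['gitsync_checked'])) return false; // once per session
    $session['gitsync_checked'] = true;
    return true;
}
```

The second call within the same session returns false, which is the once-per-session throttle.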
  5. Thank you very much for addressing this, @Roych! Would it be possible for you to release the module either here in the Modules Section and/or on GitHub? Either way it would simplify the upgrade process of your wonderful module to a one-click action (ProcessWireUpgrade) or a fully automatic one (GitSync). Cheers, Mike
  6. StripePaymentLinks 1.2.0 — Electronic withdrawal for B2C distance contracts
     Hi all, just shipped a new version of StripePaymentLinks, which adds a complete electronic right-of-withdrawal flow for online merchants.
     Why now
     EU Directive 2023/2673 (amending the Consumer Rights Directive 2011/83/EU) requires every online trader selling to EU consumers — goods, services, or digital products — to provide an easily accessible "withdrawal button" by 19 June 2026. Member states had to transpose the directive into national law by 19 December 2025; Austria's FAGG amendment is one of the implementations now landing. The guiding principle: withdrawing from a contract must not be more burdensome than concluding it. This module ships an electronic withdrawal flow that satisfies that principle, with all legal wording editable per site so it works under any national transposition.
     What's new
     • Withdrawal modal on every frontend page — Bootstrap 5 modal with form → confirmation → success steps. Renders only when invoked (link or ?withdraw=1 URL parameter); zero overhead otherwise.
     • Per-user withdrawal log — every submission is stored in a repeater on the user template (spl_withdrawals) with timestamp, products affected, salted HMAC-SHA256 IP hash, and a status field that the merchant can update from the backend.
     • Two-mail flow — the customer gets a confirmation of the withdrawal, the merchant gets an internal notification with a direct link to the user profile. Both go through the existing universal mail layout.
     • Order-confirmation mail now includes a consumer-rights block — auto-injected for every purchase, with two outcomes depending on the product: right of withdrawal applies → withdrawal instructions are rendered; right has been waived (e.g. immediate digital delivery with explicit consent) → waiver acknowledgement is rendered.
     • Site-editable legal text — two TinyMCE config fields (mailWithdrawalText, mailWaiverText) let each site operator write the exact legal wording their jurisdiction requires.
     • Placeholder system including {products}, {provider}, {contact_email}, {order_id}, {order_date}, {name}, {email}, {today} and — for the TinyMCE editor — anchor-pair placeholders that survive the editor's href-stripping: {withdrawal_mail}TEXT{withdrawal_mail_end} expands to a prefilled mailto: link (subject + body from translatable defaults); {withdrawal_online}TEXT{withdrawal_online_end} expands to a ?withdraw=1 deeplink that auto-opens the modal.
     • Built-in protection — honeypot field, per-IP rate limit, server-side CSRF, deliverability headers (Auto-Submitted, X-Auto-Response-Suppress, proper Reply-To).
     Setup is a few field saves ...
     In the module config there's a new Withdrawal fieldset with five fields:
     • Internal notification email (falls back to $config->adminEmail)
     • Contact email shown in the form (falls back to sender email)
     • Privacy policy page (page-select)
     • Withdrawal text — right of withdrawal applies (TinyMCE)
     • Waiver text — right of withdrawal does not apply (TinyMCE)
     The module creates all required fields, the spl_withdrawals repeater and status options on first save / module upgrade — fully idempotent.
     ... and adding the trigger link to your templates
     The withdrawal modal is auto-injected on every frontend page — you only need to render a link to open it. The module ships a helper for that:
     echo $modules->get('StripePaymentLinks')->renderWithdrawalLink();
     That gives you the legally required, always-visible withdrawal entry point. Typical place: site footer or account menu.
     The helper takes two optional arguments — a CSS class and a custom label — so it adapts to whatever markup your theme uses:
     echo $modules->get('StripePaymentLinks')->renderWithdrawalLink('nav-link fw-bold');
     echo $modules->get('StripePaymentLinks')->renderWithdrawalLink('', 'WITHDRAW');
     Anything that links to your site root with ?withdraw=1 also auto-opens the modal — useful for putting a direct withdrawal link inside order-confirmation mails, transactional notifications, or PDF receipts. That's the whole frontend integration — one helper call, no JavaScript wiring, no modal HTML in your templates.
     Translatable
     All UI strings (modal labels, mail subjects/bodies, status names) are PW-translatable.
     Happy to hear feedback, edge cases, or implementation experiences from other EU jurisdictions. Cheers, Mike
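Simple {placeholder} tokens like the ones listed above can be expanded with a straight string substitution — a minimal sketch, not the module's actual placeholder engine (which additionally handles the anchor-pair placeholders):

```php
<?php
// Expand {placeholder} tokens in a legal-text template.
// Illustrative helper; keys mirror the documented placeholder names.
function expandPlaceholders(string $template, array $values): string {
    $map = [];
    foreach ($values as $key => $value) {
        $map['{' . $key . '}'] = $value;
    }
    // strtr replaces all tokens in a single pass
    return strtr($template, $map);
}
```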
  7. Hi, @Stefanowitsch, I'm curious: do you hook into a PrivacyWire method or do you use its built-in custom function trigger? We simply set up a script that checks local storage for the PrivacyWire key. Then a cookie for Native Analytics is set or unset depending on its "statistics" value. Cheers, Mike
  8. Hi Peter, great to see someone tackling this. SeoMaestro is solidly built and still maintained for bugfixes, but the last feature release was June 2022 and Wanze himself has mentioned he's stepped back from active PW work, if I remember correctly. A few things that come up in client work aren't covered by it alone. I showed your announcement to our SEO specialist and asked him to put together a wishlist. We then discussed it internally and stress-tested every point. What kept coming up wasn't really "we need new features" – it was "the pieces exist, but they don't talk to each other". There's already a lot of good, actively maintained tooling in the ecosystem:
     • Wire Request Blocker (Ryan) – AI bot throttling since September 2025
     • ProcessRedirects (apeisa / teppokoivula) – 301s, wildcards, CSV import/export, v2.2.5 released Dec 2025
     • Process404Logger (kixe) – clean 404 logging
     • SeoMaestro (Wanze) – the meta/OG/sitemap foundation everyone already uses
     The actual pain in daily work is that these live as separate islands. A site owner has to install four modules and configure each one in its own admin section. The obvious workflow between them doesn't exist either – a 404 logged by Process404Logger doesn't surface in ProcessRedirects as a redirect suggestion, even though that's exactly the kind of pairing that would save real time. So the honest question for SEO NEO might not be "what new features do we need" but rather: could SEO NEO act as the umbrella that connects what's already there?
     A central admin section that surfaces:
     • SEO health (missing descriptions, duplicate titles, noindex flags) as a Lister-based audit view – this genuinely doesn't exist in the ecosystem yet
     • 404 hotspots from the logger with a "create redirect" action wired into ProcessRedirects
     • AI crawler activity from Wire Request Blocker
     • SeoMaestro field status across templates
     Plus the few things that are genuinely missing on the meta-handling side:
     • Native urlSegments support – as psy mentioned earlier in the thread, currently needs a hook in SeoMaestro
     • Schema.org helpers with documented hooks – ready-made generators for the common types (Article, FAQPage, Person, Organization, BreadcrumbList) that developers can call from templates. Not auto-detection (that doesn't work without explicit mapping), but a clean API.
     What we deliberately left off the list:
     • llms.txt generator – recent log file audits show GPTBot, ClaudeBot and PerplexityBot don't actually fetch the file. The spec is unofficial and no LLM lab has committed to honoring it. Worth revisiting if that changes.
     • Yoast-style content analysis with traffic-light scoring – tends to produce text optimized for the algorithm rather than the reader.
     Whether the right path is one big new module or a coordination layer on top of the existing ones is your call. But from the user side, the bigger win would be coherence rather than yet another standalone tool. Looking forward to seeing where this goes. Cheers, Mike
  9. Hi all, a small confession from the frameless corner of the PW universe: in the last 15 years we've spent way too many evenings doing the same FTP-shuffle on shared hosting. Delete everything in site/, drop the DB via phpMyAdmin, re-upload, run install.php, log back in, find out the bug we were chasing only reproduces after a reset, sigh, repeat. The reason we do this on real hosting at all is that the gnarly bugs in modules-under-development never show up locally — AllowOverride, mixed file ownership, mod_security, you know the drill. But "let's test cleanly on the real server" and "no SSH access" don't combine well. So we built ProcessWireReset: a module that wipes a PW install back to clean profile state from inside the admin. No SSH, no FTP, no phpMyAdmin. Click the button, log back in, you're at a freshly installed PW with your superuser intact and any modules you marked as keep re-installed automatically. A few things worth knowing, since destructive modules deserve some care:
     • Modules to keep + Directories to keep. Two fields in the config: one picks which modules survive (transitive dependencies included), the other is a free-form list of paths under site/ that should be spared by the cleanup — handy for things like templates/RockIcons or assets/backups that live outside the module directories.
     • Custom tables go into a snapshot. After the reset you can pick which module-specific tables to restore. Auto-restoring everything turned out to fight with re-installed schemas more often than we liked.
     • The reset can crash mid-way — a kept module's install() can fatal in surprising ways. The confirmation modal hands you a one-time recovery URL with a 256-bit token. If the worst happens, that URL gives you a clean reinstall with your original credentials. Belt, braces, and one extra strap.
     • It's interactive only. No cron triggers, no CI hooks. The destructive button has a real human in front of it, on purpose.
     Pairs nicely with GitSync: if you're already using our GitSync module, ProcessWireReset is the missing other half. GitSync pulls a fresh module version from your GitHub repo into the live install at the click of a button — but it doesn't touch the DB or re-run install(). After a GitSync pull that changed schemas, fields, or admin pages, the previous install state and the new code drift apart. Hit Reset, the module is removed and re-installed cleanly from the freshly pulled code, and you're testing what you actually shipped instead of a frankenstein of old DB state and new files. That GitSync → Reset → test loop is what we use daily on shared-hosting test installs where SSH isn't an option.
     Repo (MIT): https://github.com/frameless-at/ProcessWireReset
     Modules Directory: https://processwire.com/modules/process-wire-reset
     Caveat the obvious: this thing is for development, not for production. Treat it accordingly. Curious to hear what you build/break with it. Bug reports and pull requests welcome. Cheers, Mike
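A one-time recovery token with 256 bits of entropy is cheap to generate in PHP — a sketch of the idea only; how ProcessWireReset actually creates and stores its token may differ:

```php
<?php
// Generate a one-time recovery token: 32 random bytes = 256 bits,
// hex-encoded to 64 characters for safe use in a URL (illustrative).
function makeRecoveryToken(): string {
    return bin2hex(random_bytes(32));
}
```

random_bytes() is cryptographically secure, so the token can't be guessed even by someone who knows when it was generated.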
  10. Update: Improved "Link Module" UX
     We had an internal discussion about the "Link Module" interface and optimized how the different states are handled:
     • Match found – The repo is resolved and ready to link. The green link opens the repository on GitHub so you can verify it's the right one before linking. This appears instantly when the module declares its GitHub URL in getModuleInfo() or has been resolved before (cached), otherwise after a quick GitHub search.
     • No repo found – ProModules like RepeaterMatrix have no public GitHub repo. GitSync shows a clear "No repositories found." instead of false matches.
     • Multiple repos – When a module exists in several repos (forks, different maintainers), you get a list to pick from.
     • Selected, with "change" – After picking one, a "change" link lets you switch. It only appears when there are actually alternatives.
     Other improvements: single results are now auto-selected (no unnecessary click), the GitHub search uses the Code Search API for exact .module.php filename matching (works even when repo name ≠ class name), and results are cached client-side so re-selecting a module is instant. Cheers, Mike
  11. Hey folks, we at frameless Media often develop across multiple devices – laptop, tablet, sometimes even from a phone with an AI coding assistant. Git is our single source of truth, but getting those changes onto a staging or production server has always been annoying. Especially on shared hosting where there's no SSH, no git, and git-based FTP via YAML configs is more hassle than it's worth. We also frequently need to test new modules directly on shared hosting environments where the server setup differs from our local machines. Manually uploading files after every push? No thanks. So we built GitSync.
     🎯 TL;DR:
     ✅ Link any installed module to its GitHub repo
     ✅ See all branches and their latest commits
     ✅ One-click sync – only changed files are downloaded
     ✅ GitHub Webhook support – auto-sync on every push
     ✅ Works on shared hosting – no git, no SSH, no cron
     ✅ Private repo support via GitHub Token
     What's the difference to ProcessUpgrade? ProcessUpgrade is great for updating published modules from the PW modules directory. But it tracks releases, not branches. During development, when you're pushing to `develop` or `feature/xyz` ten times a day, you need something different. That's where GitSync comes in.
     🚀 How it works
     Install the module, add your GitHub Token (optional for public repos)
     Go to GitSync > Add Module, pick any installed module from the dropdown
     GitSync searches GitHub for matching repositories automatically
     Link the module to a repo + branch – done
     From now on, you can sync with one click. GitSync compares file hashes locally and remotely (using the same SHA1 blob hashing that git uses internally) and only downloads what actually changed. No full re-downloads, minimal API usage. Want it fully automatic? Set up a GitHub Webhook – enter a secret in the module config, point the webhook to `https://yoursite.com/gitsync-webhook/`, and every push triggers an automatic sync.
     The module overview shows a ⚡ webhook badge on auto-synced modules so you always know what's wired up.
     The real power: remote development with AI 📱
     You're on the train, phone in hand, chatting with Claude via the Claude app. Claude writes code, commits to a feature branch on GitHub. GitSync picks up the webhook and syncs the module to your dev server. Automatically. You open the edited webpage on your phone, check the result, give feedback, iterate. The entire development loop without ever opening a laptop. 🤯 This works just as well for teams: multiple developers push to GitHub from different machines, and the staging server always reflects the latest state – no manual deploys, no SSH sessions, no FTP. We've been using a prototype internally for a few weeks now and it's become part of our daily workflow – especially the webhook auto-sync is something we don't want to miss anymore. As proof of concept we built the public release entirely as described above 😃
     Technical details for the curious
     The differential sync works like git itself: every file's content is hashed as `sha1("blob {size}\0{content}")`. GitHub's Trees API returns these hashes for the entire branch in a single request. GitSync computes the same hash locally. Matching hash = identical file = skip.
     Requirements
     ProcessWire >= 3.0 and PHP >= 7.4 with cURL
     Module and Docs
     👉 GitHub: https://github.com/frameless-at/GitSync
     👉 Module Directory: https://processwire.com/modules/git-sync/
     Would love to hear your thoughts, ideas, and edge cases we might not have considered! Cheers, Mike
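The hashing formula quoted above is git's standard blob object hash, so it's easy to reproduce locally:

```php
<?php
// Compute the git blob SHA-1 for a file's content:
// sha1("blob {size}\0{content}") — the same hash GitHub's Trees API
// returns for every file in a branch.
function gitBlobSha1(string $content): string {
    return sha1('blob ' . strlen($content) . "\0" . $content);
}
```

If the local hash matches the remote tree entry, the file is byte-identical and the sync can skip it.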
  12. Hey everyone, on a recent client project we had to deal with a large number of Markdown files that needed to end up as regular HTML content on ProcessWire pages. Converting them manually or piping them through external tools wasn't an option – too many files, too tedious, and the content had to be stored as actual HTML in rich text fields, not just formatted at runtime. So we built a small module that handles this directly inside ProcessWire.
     How it works
     The module creates a file upload field (md_import_files) and a Repeater field (md_import_items) with a standard title field and a richtext body field (md_import_body) inside. The body field automatically uses TinyMCE if installed, otherwise CKEditor. You add both fields (md_import_files, md_import_items) to any template, upload your .md files, hit save – each file gets converted to HTML via PW's core TextformatterMarkdownExtra and stored as a separate Repeater item. The source filename goes into the item's title, and processed files are removed from the upload automatically.
     Template output
     The Repeater items are regular PW pages, so output is straightforward:
     foreach ($page->md_import_items as $item) {
         echo "<section>";
         echo "<h2>{$item->title}</h2>";
         echo "<div>{$item->md_import_body}</div>";
         echo "</section>";
     }
     Tag mappings
     One thing we needed right away: control over how certain Markdown elements end up in HTML. For example, # headings in Markdown become <h1> – but on most websites <h1> is reserved for the page title. The module has a simple config (Modules → Configure → Markdown Importer) where you define tag mappings, one per line:
     h1:h2
     h2:h3
     strong:b
     blockquote:aside
     hr:br
     This performs a simple 1:1 tag replacement after conversion, preserving all attributes. Works well for standalone or equivalent elements like headings, inline formatting, blockquotes, or void elements like hr:br.
     Note that it doesn't handle nested structures – mapping table:ul for example would only replace the outer <table> tag while leaving thead, tr, td etc. untouched.
     Requirements
     ProcessWire 3.0.0+
     FieldtypeRepeater (core)
     TextformatterMarkdownExtra (core)
     GitHub: github.com/frameless-at/MarkdownImporter
     Modules Directory: https://processwire.com/modules/markdown-importer/
     Happy to hear if anyone finds this useful or has suggestions for improvements. Cheers, Mike
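A 1:1 tag mapping that preserves attributes can be sketched with two regex passes — a simplified illustration of the technique, not the module's exact implementation:

```php
<?php
// Replace <h1 ...> with <h2 ...> (attributes kept) and </h1> with </h2>.
// Simplified sketch; like the module, it does not descend into nested
// structures, so a table:ul mapping would only touch the outer tag.
function mapTag(string $html, string $from, string $to): string {
    $f = preg_quote($from, '/');
    // Opening tags: match "<h1" only when followed by whitespace, ">" or "/"
    $html = preg_replace('/<' . $f . '(?=[\s>\/])/i', '<' . $to, $html);
    // Closing tags
    return preg_replace('/<\/' . $f . '>/i', '</' . $to . '>', $html);
}
```

The lookahead keeps h1 from matching inside h10, and void elements like hr:br work because they simply have no closing tag to rewrite.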
  13. Hi, everyone! While working on a client project we were looking for a way to let editors apply CSS classes to individual images in rich text fields — quickly, visually, and also in the frontend editor. ProcessWire already has several ways to get CSS classes onto images, so it's worth being precise about what this module does differently: TextformatterFluidImages adds one class to all images automatically — great for img-fluid across the board, but there's no per-image choice. TextformatterImageInterceptor is more powerful: editors tag images in the image field, and the Textformatter applies the corresponding classes at render time. The logic is developer-defined and centralized, which is exactly right when you want consistent, rule-based image treatment. But the class is invisible in the editor, applied only in the frontend output, and editors have to set the tag in a completely separate place from where they're actually working. TinyMCE's built-in styleFormatsCSS is the closest thing to what we wanted. You write CSS, ProcessWire turns it into a Styles dropdown. It works, but the dropdown is generic — it shows all defined styles regardless of what's selected — and there's a known accumulation issue where nothing prevents float-left float-right ending up on the same image. And it doesn't work in the frontend editor. What we needed was simpler: editor clicks an image, picks a style, sees immediately which styles are active, can combine them or remove them individually. No dialogs, no separate fields, no render-time magic — the class goes directly into the <img> tag in the saved HTML, visible and editable right there in the editor. That's what this module does: It registers a context toolbar in TinyMCE that appears as a floating "Image Style" button when an image is selected. For CKEditor the same options show up in the right-click context menu. 
The class list is defined once in the module settings and works across both editors — no separate configuration per editor type. Each entry shows a checkmark when active, clicking it again removes it, multiple classes can be combined freely. Works in the admin and in the frontend editor. Complete Readme on GitHub: https://github.com/frameless-at/ProcessImageClasses and the module directory. Any thoughts on further improvements welcome! Cheers, Mike
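The toggle behavior — add the class if absent, remove it if present — can be illustrated with a tiny helper. This is a hypothetical PHP sketch of the logic only; the module itself does this in the editor's JavaScript on the live DOM:

```php
<?php
// Toggle a CSS class within a class attribute value: adds it when absent,
// removes it when present, leaving other classes untouched (illustrative).
function toggleClass(string $classAttr, string $class): string {
    $classes = preg_split('/\s+/', trim($classAttr), -1, PREG_SPLIT_NO_EMPTY);
    $pos = array_search($class, $classes, true);
    if ($pos === false) {
        $classes[] = $class;   // not active -> add
    } else {
        unset($classes[$pos]); // active -> remove
    }
    return implode(' ', $classes);
}
```

This is also why classes can be combined freely: each toggle only ever touches its own entry in the list.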
  14. [Update] v1.0.25 – Multi-Email Account Merge A small but handy addition for real-world scenarios: customers sometimes purchase with different email addresses and end up with split accounts. The new Merge User Accounts tool in the module config lets you consolidate them in seconds. You enter the source email (the old/unwanted account) and the target email (the one the customer wants to keep). The module transfers all purchases from the source to the target and permanently deletes the source account. Before committing, you can run it in test mode to see exactly what would be transferred – no changes are written. Once you're confident, check "Merge now", save, and the merge report confirms what happened. One thing worth noting: the source account is permanently deleted after the merge – so double-check in test mode first. 🙂 Feedback welcome!
  15. Hi Thomas, @chuckymendoza, thanks for thinking this through so carefully — and yes, the bottle analogy is growing on me. 😄 That said, I'd still recommend not rewriting the module for this use case — not because it's technically impossible, but because the wheel has already been invented, and quite well: Option 1 — FormBuilder + Stripe Processor (by @ryan) Ryan's Stripe Processor Action for FormBuilder supports customizable line items via ProcessWire hooks out of the box. You could build a font-family selector form, populate the line items dynamically, and send the whole bundle to Stripe Checkout in one session — exactly what you described. It's not a cart system, but for a single-family "pick your styles" page it could be a very clean fit. Option 2 — RockCommerce (by @bernhard) Released in late 2024, RockCommerce is a modern, lightweight e-commerce module built "the ProcessWire way." It supports cart, checkout, product variations, coupons, payment webhooks, and pluggable payment providers — without the bloat of a full enterprise shop system. Option 3 — Padloper / ProcessWire Commerce (by @kongondo) If you need the most mature, battle-tested solution with a long track record, ProcessWire Commerce is the established open-source choice. Full cart, order management, variants, webhooks — all there. Rolling your own Stripe Checkout Session integration inside this module would mean maintaining webhook handling, session state, download token security, order persistence — all things that are already solved in the tools above. The maintenance burden alone usually kills these side projects. My honest take: for your font-family use case, RockCommerce or FormBuilder + Stripe Processor are the most promising starting points. For anything larger in scope, ProcessWire Commerce has you covered. Cheers, Mike
  16. [Update] Free Access Display Support (v0.1.7) Hi everyone! Following the latest release of StripePaymentLinks v1.0.23, we've updated this module to match the new free product access — so manually granted product access now shows up properly in all portal views. When a user has products assigned via spl_free_access, those products appear in the portal almost like purchased ones:
     • Grid view: Free-access products render as full active cards instead of grayed-out "not yet purchased" ones.
     • Table view: Free-access rows get a cyan "Free access" badge in the status column. The date column shows a dash (—) since there's no purchase timestamp involved.
     This makes it immediately clear which products came through Stripe and which were granted manually. Cheers, Mike
  17. [Update] New Feature: Free Product Access (v1.0.23) Hi, everyone! We just released v1.0.23 with a new feature based on a real client use case: a client using StripePaymentLinks wanted to grant certain high-value customers bonus access to additional products — no Stripe transaction involved, just a manual override. The solution: a new spl_free_access field (AsmSelect, multi-page) on the user template. Pages selectable in the dropdown are automatically restricted to the configured product templates, so editors only see relevant products — every user in the backend gets a "Free Product Access" field where additional products can be assigned directly. Access granted this way is recognized by hasActiveAccess() — the same internal check SPL uses for all content gating — so it integrates transparently with the existing module logic. The field is created and updated automatically by ensureUserFields(), including template restriction syncing when the product template config changes. Thanks for the request and feedback that led to this improvement! Cheers, Mike
18. Hi, @chuckymendoza aka Thomas 😉 Sorry for the delay, we've been busy and I didn't get to read the forum in the past weeks. In short: for your use case I would definitely NOT use the module. What you need is some kind of shop system, because you want your customers to be able to buy MORE than one font at once, don't you? That is something the module explicitly is NOT made for. So yes, maybe you can hang a picture with a nail and a bottle, but I'd still recommend the hammer. 😉 Cheers, Mike
19. We use the StripePaymentLinks module for all projects where a few products or services are sold on landing pages. Typically, these are NOT shops and do not require a shopping basket. Cross-selling can be implemented directly with Stripe, and the module covers this as well. For shop functionality, when needed, we use whatever fits best; RockCommerce, for example, is elegant and can be set up in no time. So you just need to ask yourself: do I want my customers to be able to buy more than ONE product at the SAME TIME? Cheers, Mike
20. RockCommerce from @bernhard could also be a choice: https://www.baumrock.com/en/processwire/modules/rockcommerce/ It has since become open source and offers all the features of a shop, including cart and checkout via an external payment provider. Out of the box that's Mollie, but since the provider is an interface class, you can add any other easily. We tested this module a while ago and it was really nice to work with 👍 Also open source for some time now is the well-known ProcessWire Commerce (formerly "Padloper") by @kongondo. Because it was mentioned: StripePaymentLinks is not recommended when you need cart functionality. Cheers, Mike
21. Hi, @Pavel Radvan, thanks for reporting! That was just a wrong version number in the module's info.json file. We fixed it, so when you run the update now, it shows the correct version number. Cheers, Mike
22. Hi, @eelkenet, thanks for the findings! We updated the module to version 1.1.2, which addresses all 4 points:

- Specific error messages for upload_max_filesize, UPLOAD_ERR_PARTIAL, etc.
- The MappingEngine now has multiple fallback strategies
- Title field selection via a dropdown in the UI for manual selection
- All parsers now use max_rows for data limiting

Good luck with the WP import, that one can be a bit tricky 😉 due to WP's "unique" structure.

Cheers, Mike
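For reference, a minimal sketch of what "specific error messages" can look like, built on PHP's standard UPLOAD_ERR_* constants; the exact wording in v1.1.2 differs.

```php
<?php
// Sketch: map PHP's built-in upload error codes to specific, actionable
// messages instead of a generic "upload failed". Wording is assumed.
function uploadErrorMessage(int $code): string
{
    switch ($code) {
        case UPLOAD_ERR_OK:         return 'Upload OK.';
        case UPLOAD_ERR_INI_SIZE:   return 'File exceeds upload_max_filesize in php.ini.';
        case UPLOAD_ERR_FORM_SIZE:  return 'File exceeds the form MAX_FILE_SIZE limit.';
        case UPLOAD_ERR_PARTIAL:    return 'File was only partially uploaded; please retry.';
        case UPLOAD_ERR_NO_FILE:    return 'No file was uploaded.';
        case UPLOAD_ERR_NO_TMP_DIR: return 'Server is missing a temporary upload folder.';
        case UPLOAD_ERR_CANT_WRITE: return 'Server failed to write the file to disk.';
        case UPLOAD_ERR_EXTENSION:  return 'A PHP extension blocked the upload.';
        default:                    return 'Unknown upload error (' . $code . ').';
    }
}
```

With a mapping like this, the user immediately knows whether to shrink the file, retry, or contact the server admin.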
23. Thanks, Ivan. Screenshots are included in the Readme file on GitHub and in the module directory (once the module is approved). Regarding Repeater (Matrix) fields: as the module handles external (non-ProcessWire) data structures, what exactly do you mean by "handled"? It is not possible to map data to an existing template/field structure, if that's what you are asking for.
24. Hey everyone, here at frameless we frequently work with clients who already have a website but aren't happy with it and want us to rebuild it from scratch. Whenever possible, we use ProcessWire for new web projects – no surprise there, given the flexibility and clean API we all love.

For smaller sites, migrating content is usually straightforward – a bit of copy/paste and you're done. But for larger projects with hundreds or thousands of records across multiple database tables, this quickly becomes tedious and error-prone. Over the years, we've written various import scripts and parsers to handle these migrations. We finally decided to clean them up and package everything into a proper module that we'd like to share with the community.

Introducing: Data Migrator

Data Migrator is a Process module that imports external data (SQL dumps, CSV, JSON, XML) directly into ProcessWire's page structure – including automatic creation of templates, fields, and even PHP template files.

Key Features
- Multi-format support – Import from .sql, .csv, .json, and .xml files
- Automatic type detection – Recognizes emails, URLs, dates, booleans, integers, etc. and maps them to appropriate ProcessWire fieldtypes
- SQL schema parsing – Extracts column types from CREATE TABLE statements for better field mapping
- Foreign key handling – Detects FK relationships and sorts tables by dependency order
- Dry Run mode – Preview exactly what will be created before committing anything
- Full Rollback – Undo an entire migration with one click (removes all created pages, templates, and fields)
- Template file generation – Automatically creates ready-to-use .php template files in /site/templates/

How it works
1. Upload your data file (SQL dump, CSV, JSON, or XML)
2. Review the analysis – the module shows detected tables, columns, suggested fieldtypes, and sample values
3. Fine-tune if needed – override fieldtypes via dropdown, configure FK relationships
4. Run a Dry Run to preview all changes
5. Execute the migration – templates, fields, parent pages, and data pages are created automatically
6. If something's wrong – hit Rollback to cleanly undo everything

Requirements
- ProcessWire 3.0.0+
- PHP 7.4+

Links
- GitHub: github.com/frameless-at/ProcessDataMigrator
- Modules Directory: /modules/process-data-migrator/

We've been using the methods and classes bundled in this module internally for a while now, and it has saved us a lot of time on migration projects. We hope it's useful for others facing similar challenges. Feedback, bug reports, and feature requests are welcome!

Cheers, Mike
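To give an idea of the automatic type detection: a rough, value-based sketch along these lines. The module's real heuristics are more involved (they also use SQL schema info and value sampling), and this helper is invented for illustration; detected types would then map to fieldtypes such as FieldtypeEmail, FieldtypeURL, FieldtypeInteger, FieldtypeDatetime, or FieldtypeText.

```php
<?php
// Invented sketch of value-based type detection. Order matters: the
// more specific checks (email, URL) run before the generic ones.
function detectType(string $value): string
{
    $v = trim($value);
    if (filter_var($v, FILTER_VALIDATE_EMAIL)) return 'email';
    if (filter_var($v, FILTER_VALIDATE_URL))   return 'url';
    if (preg_match('/^-?\d+$/', $v))           return 'integer';
    if (in_array(strtolower($v), ['true', 'false', 'yes', 'no'], true)) return 'boolean';
    if (preg_match('/^\d{4}-\d{2}-\d{2}/', $v)) return 'datetime';
    return 'text'; // safe fallback
}
```

In practice the detector would run over a sample of rows per column and pick the type that wins across the sample, not just the first value.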
25. Hey everyone! After the StripePaymentLinks module has been running smoothly, a few customers with multiple Stripe accounts asked for better analytics capabilities. The Stripe dashboard is okay, but when you have multiple accounts and need specific analysis, it quickly becomes tedious.

StripePlAdmin is an admin interface that displays the data stored by StripePaymentLinks in three perspectives:

- Purchases: All transactions with customer details, subscription status, renewals
- Products: Aggregated product performance (revenue, purchases, quantities)
- Customers: Customer lifetime value, purchase behavior

Features:
- Configurable columns per tab
- Dynamic filters (Boolean search, date ranges, number ranges)
- Clickable product/customer names open detail modals
- CSV export with active filters
- Summary totals at table footer

You can show/hide columns and filters in the module settings as needed. Everything is very flexible. Available on GitHub and in the Modules directory. Feedback welcome! 🚀

Cheers, Mike
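A tiny sketch of the "CSV export with active filters" idea: apply the active filters first, then stream only the surviving rows through fputcsv(). The helper, column names, and sample data are all invented for illustration.

```php
<?php
// Invented sketch: export only the rows that pass the active filter,
// in the chosen column order, as a CSV string.
function exportFilteredCsv(array $rows, callable $filter, array $columns): string
{
    $fh = fopen('php://memory', 'r+');
    fputcsv($fh, $columns); // header row
    foreach (array_filter($rows, $filter) as $row) {
        fputcsv($fh, array_map(fn($col) => $row[$col] ?? '', $columns));
    }
    rewind($fh);
    return stream_get_contents($fh);
}

// Invented sample data
$rows = [
    ['customer' => 'Anna', 'total' => 120, 'active' => true],
    ['customer' => 'Ben',  'total' => 40,  'active' => false],
];
$csv = exportFilteredCsv($rows, fn($r) => $r['active'], ['customer', 'total']);
```

Because the same filter closure drives both the on-screen table and the export, what you see in the UI is exactly what lands in the CSV.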