
Leaderboard

Popular Content

Showing content with the highest reputation on 03/02/2016 in all areas

  1. Migrator can definitely do this sort of stuff. It hasn't seen much love of late (although I did just commit a quick patch for a bunch of Notices that I had been ignoring - Tracy made them too hard to ignore). Anyway, I still use the module regularly in my own development and rarely have any problems. I can't imagine working without it when setting up new sections of content for an already live site - just build the fields, templates, and pages in dev and then export/import and you're done. I do always test the import on another dev setup before importing to production as a check. That said I know there are still some issues to be dealt with. However, I have tested it successfully with a multi-language setup and with all the Profields (except Matrix Repeaters) and normal repeaters. I still want to get back to sorting out the last of the bugs, but some have been hard to reproduce. I would be curious to see if it works well for your needs.
    6 points
  2. @Ovi_S we run on a ServInt server here too, and actually so do all my clients… and pretty much every site I've worked with for the last 10 years. I can't say I've ever observed the issue you've mentioned. So I doubt it's a ServInt issue; instead it must be related to something about the server configuration. For instance, none of the servers I use at ServInt have suhosin or suphp active. Now I doubt that suphp would be an issue, but suhosin, maybe. It would be worth trying to disable that at least temporarily if you can. Since the issue you've described seems a little random and no errors are involved, it has that sound of something interfering with the normal requests, which I think suhosin might do. Based on the available info so far, I'm inclined to think this is the first thing that should be looked at. I'm sure you've tried this, but does using a different browser make any difference? If you look in your /site/config.php file, what settings are you using for $config->chmodFile and $config->chmodDir? Are you able to duplicate the issue on a completely stock installation of PW 2.7.x or 3.0.x using one of the default profiles (with no 3rd party modules installed)? If so, I'm happy to take a look at it if you don't mind PM'ing me the PW admin login and FTP (or SSH) login to the installation. I would also need instructions on exactly what steps to take in order to reproduce the issue.
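For reference, those two settings live in /site/config.php. A minimal sketch with commonly used values (these are just illustrative; the right permissions depend on your server setup):

// /site/config.php (excerpt)
$config->chmodFile = '0644'; // mode applied to files ProcessWire creates
$config->chmodDir  = '0755'; // mode applied to directories ProcessWire creates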
    5 points
  3. I've found that with the field/template import/export it is sometimes necessary to do it twice: once to create the entries (skipping the relationships/family settings), and a second time to apply the correct family settings.
    3 points
  4. You probably shouldn't store site content inside the Admin branch, as there are various pages there that are hidden, not viewable by non-superusers, and such things. So you'd need to use include=hidden or check_access=0 just to make things work. I'm personally not a fan of this, as any errors made in selectors could open a security hole. I'd rather use a page field to link users to orders and store the orders under a different parent outside of the admin branch.
    3 points
  5. The system that's proven more effective for me is:
1 — do all the non-destructive database changes on the live site (adding fields and adding them to templates, creating new templates and so on)
2 — import the database to the development server and do the template changes
3 — pull the new files to the live site without touching the database
4 — do some cleaning on the live database if needed (remove fields, templates and so on)
Another tip, although this one doesn't really answer your question, just because I think it can be useful: one system that I already used effectively was to create another install of PW (a copy of the other) on the live server (in a subdirectory or subdomain) that connects to the same database, and develop all the changes to the template files right there, using real and up-to-date content. Then I simply replace the template folder with the new one.
    3 points
  6. A very honest write-up by our very own André (@onjegolders): http://milktop.co.uk/articles/picking-the-right-cms-for-the-job I already told him about the built-in frontend editing in 3.0 and the ProDrafts live editing.
    3 points
  7. Workers are separate processes which run alongside your webserver stack on the host system. They can process things they get out of the queue without being reliant on the Request->Process->Response cycle of the webserver. E.g. you want to have a way for users to request a big report on your website. As "big report" suggests, it takes some time to generate the pdf for it (say ~1 min). You wouldn't want your user to sit around and watch the nice browser spinner, waiting for the next page to pop up. Instead you can write the request into a queue and instantly return a message stating that the report will be generated and emailed to the user when ready. To generate that report you may know the option of using a cron job to trigger website logic without an actual user requesting a page, but that's also flawed with things like timeouts, and it only starts at fixed intervals. That's where workers come in. These are php (and other script) processes, which often run as services, like e.g. your apache server does. These scripts contain some way to not exit early (e.g. an infinite while loop), which is the biggest difference to a website request, and they periodically check their queue to see if there's work to do. If your report request is in it, they generate the pdf, email it and, once done, delete the queue item. If there are other reports to do they'll move on to them; otherwise they sleep until another queue item is available.
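To make that concrete, here is a minimal sketch of such a worker loop (the queue directory and job file format are just assumptions for illustration; a real setup would typically use something like a database table, Redis or beanstalkd as the queue):

<?php
// minimal worker sketch: poll a queue directory for job files,
// process each one, then sleep until the next check
$queueDir = __DIR__ . '/queue';

while (true) { // keep the process alive, unlike a normal web request
    foreach (glob($queueDir . '/*.job') as $jobFile) {
        $job = json_decode(file_get_contents($jobFile), true);
        if ($job === null) continue; // skip malformed job files

        // ... generate the PDF report here and email it to $job['email'] ...

        unlink($jobFile); // remove the item from the queue once it is done
    }
    sleep(5); // nothing to do: wait a bit before polling again
}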
    3 points
  8. Thanks for the detailed answer. Yes, the external database (however not MySQL) is not built on PW. Yes, the URL is served by PW. Now I know how to proceed. What I needed was the term «url segment».
    3 points
  9. Hi nalply, and welcome to ProcessWire and the forums (even though it's your second post). The external db is not built on ProcessWire? But the url "www.example.com/external-database/item/<id>" is served by ProcessWire, right? In your template you want to get the item from the external db which was provided in the url (<id>)? So if I get you right, you could use WireDatabasePDO like so:

$urlSegment1 = $sanitizer->int($input->urlSegment1);

$customDb = new WireDatabasePDO('mysql:host=localhost;dbname=EXTERNAL_DATABASE_NAME', 'USER', 'PASS');

$query = $customDb->prepare('SELECT column FROM table WHERE id=:id');
$query->execute(array(':id' => $urlSegment1));
$result = $query->fetchAll(PDO::FETCH_OBJ);

foreach ($result as $r) {
    echo $r->column;
}

So for this template you need to enable url segments in the backend to get it working, and of course you need to adjust the connection and the query. But that's basically what you put in the template file, e.g. called "external-database".
Saludos
Can
    3 points
  10. On the source site, go to Setup > Fields, and click the [Export] button at the bottom of the page. Choose your new fields, then click [Export]. This will give you a chunk of JSON-encoded text which represents your chosen fields. Copy the text, and paste it into the same area on the live site - Setup > Fields > [import]. Then do the same for templates if necessary, which will pick up the changes to add those new fields to the relevant templates if they already exist, or create the templates if they don't.
    3 points
  11. Short answer because I'm on mobile: look in the docs for URL segments.
    3 points
  12. @Jason Huck: Have you already checked out adrian's ProcessMigrator module? https://github.com/adrianbj/ProcessMigrator Personally I never used it, but adrian seems to be always online if you need help. Maybe he never sleeps
    3 points
  13. Hi everyone, I have just set up the ability to customize the protocol handler for opening files in your editor directly at the line of the error - this is potentially a huge timesaver. The new config setting is "Editor Protocol Handler": It is initially configured for SublimeText because that is my editor and I know lots of you also use it. To make things work, grab this free subl:// protocol registering app: https://github.com/saetia/sublime-url-protocol-mac - note the instructions at the bottom if you need to make it work for ST2 instead of ST3. If you're not using ST, there are other ways to set up your own custom protocol handler, but I'll leave you guys to figure that out. Once you have done that you will be able to click on any of the links in errors, dumps, and barDumps and it will open the file in ST at the exact line. Note: the links are to the actual original files, not the PW 3.x compiled versions!
    3 points
  14. Just found a Suhosin setting that will definitely interfere with PW files fields: note the 64 default setting for the "max length" of variable names in POST requests. This appears to be ON by default with this value in Suhosin. That's going to cause issues all over the place in PW. If you have a files field with a name longer than 18 characters, or any files field in a repeater, then Suhosin is going to block any data in it. It looks to me like Suhosin's default settings will potentially interfere with most of PW's multi-value inputs, not to mention that some of Suhosin's other default rules will interfere with other parts of PW (and I'm guessing almost any powerful CMS other than perhaps WordPress).
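If you want to check what limits your server is actually enforcing, a quick sketch from any PHP script (assuming these are the relevant Suhosin directives on your install; false means the directive isn't set or Suhosin isn't loaded):

<?php
// print the variable-name length limits Suhosin applies to POST/request data
var_dump(ini_get('suhosin.post.max_name_length'));
var_dump(ini_get('suhosin.request.max_varname_length'));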
    2 points
  15. Just looking at all the things Suhosin gets involved with (https://suhosin.org/stories/configuration.html), it seems likely to me that something with your files field(s) is generating a false positive with Suhosin. For instance, maybe it doesn't like something about the property name used by file descriptions, or some combination of factors. It looks like there is a whole lot of stuff that can be configured with Suhosin that could cause all sorts of problems with different web applications. So after looking closer at Suhosin, I stand by what I said earlier: this is probably the first thing to look at. If you are using Chrome, it's definitely not the browser. I think the majority of us here are using the latest Chrome as well. These permissions are usually considered to be far too open. It's certainly possible suhosin is having a problem with this. Since you are on a dedicated VPS, maybe it doesn't matter much for your situation, but perhaps it does to Suhosin. Since you are using suphp, you can actually lock those permissions down quite far... the files likely don't even need to be readable outside of the user account. Whereas your current permissions have them globally readable and writable by everyone (even if everyone is just you, since it's a dedicated environment, it's still likely to trip security-related things like Suhosin). If disabling Suhosin doesn't do it, see if you can find any installation producing the issue reliably on a core Images field, and we can take a look at that.
    2 points
  16. They look pretty good too, and that announcement is really interesting from a business perspective. Sparkpost was among a few others I was investigating, but it felt like the sudden change in pricing could just as well be reversed, and 100K free forever felt a bit too good to be true. The thing that really annoyed me about Mandrill was that they pulled the rug out from under our feet in a really painful way, and that commitment from Sparkpost is very welcome. I will probably also implement their API, based on the code from the Mailgun module. It's slightly more complex, but not by a big margin. Choice is good. Sending through the API is so much faster; SMTP just kills servers at large volumes!
    2 points
  17. For those interested in the Mailgun module, just letting you know it's ready to use. I'd appreciate you guys giving me a hand in testing things; let me know if you encounter weird encoding errors (it's been tested with Litmus and should be ok, but you never know) or anything else. The only thing you should know is that there is a requirement for PHP 5.5+ only if you use the attachment functionality, as the cURL functions I'm using for files (the CURLFile class vs @) require it, though that will be fixed in a newer version by the end of the week. Please file reports on the repo tracker, not on the forums; it's going to be much easier tracking things that way.
    2 points
  18. In this tutorial I will cover how to use clsource's REST Helper classes to create a RESTful API endpoint within a PW-powered site and how to connect to it from the outside world with a REST client. This is a quite lengthy tutorial. If you follow all along and make it through to the end, you should get both a working REST API with ProcessWire and hopefully some more basic understanding of how these APIs work. As always with PW, there are many ways you could do this. My way of implementing it and the code examples are loosely based on a real-world project that I am working on. Also, this is the first tutorial I am writing, so please bear with me if my instructions and examples are not that clear to understand. And please let me know if something is missing or could be made more clear.
The steps covered:
- create templates and pages in the PW backend to get an API endpoint (a URL where the API can be accessed)
- copy and save the REST Helper classes to your site
- create a template file and write some logic to receive and process data through our endpoint and send data back to the REST client
- test the whole setup with a Browser REST Client Addon
I will not go into fundamentals and technical details on how RESTful APIs are supposed to work. I assume that you have already read up on that and have a basic understanding of the principles behind that technology. Some helpful resources to brush up your knowledge:
https://en.wikipedia.org/wiki/Representational_state_transfer
http://www.restapitutorial.com/lessons/whatisrest.html
The complete pages.php template is attached to this post for copy/paste. Let's get started.

1. create templates and pages in the PW backend to get an API endpoint (a URL where the API can be accessed)

First we need to create some templates and pages in the PW backend to make our REST API accessible from the outside world through a URL (API endpoint). In my example this URL will be: https://mysite.dev/api/pages/
Note the "https" part. While this is not mandatory, I strongly recommend having your API endpoint use the https protocol, for security reasons. Further down in step 3 we will use this URL to create new pages and to update and get data of existing pages.
Go to your PW test site admin and:
- create 2 new templates: one is called "api", the other one "pages". For a start, they both have only a title field assigned. Just create the templates. We will create the corresponding files later, when we need them.
- enable "allow URL segments" for the "pages" template. We will need this later to access data sent by the requests from the client.
- in the Files tab of the "pages" template check "Disable automatic append of file: _main.php"
- create a new page under the home page with title, name and template "api" and set it to hidden
- create a child page for the "api" page with title, name and template "pages"
The page tree should look somewhat like this:
Ok, now we're all set up for creating our API endpoint. If you browse to https://mysite.dev/api/pages/ you will most likely get a 404 error or will be redirected to your home page, as we do not have a template file yet for the "pages" template. We will add that later in step 3.

2. copy and save the REST Helper classes to your site

I have the REST Helper class sitting in site/templates/inc/Rest.php and include it from there. You could save it in any other location within your site/templates folder. I forked clsource's original code to add basic HTTP authentication support.
Click here to open my raw gist, copy the contents and save them to /site/templates/inc/Rest.php. In the next step we will include this file to make the classes "Rest" and "Request" available to our template file.

3. create a template file and write some logic to receive and process data through our endpoint and send data back to the client

This will be the longest and most complex part of the tutorial. But I will try to present it in small, easy to follow chunks.
Since we access our API at https://mysite.dev/api/pages/, we need to create a template file called "pages.php" for our "pages" template and save it to /site/templates/pages.php. Go ahead and create this file (if you're lazy, copy the attached file).
Now right at the top of pages.php, we start with

<?php
require_once "./inc/Rest.php";

to include the REST Helper classes. Next, we initialize some variables that we will keep using later on:

// set vars with the default output
$statuscode = 200;
$response = [];
$header = Rest\Header::mimeType('json');

3.1 retrieve data with a GET request

Now that we have the basics set up, we will next create the code for handling the easiest request type, a GET request. With the GET request we will ask the API to return data for an existing page. To let the API know which page it should return data for, we need to send the page id along with our request. I am attaching the page id as a URL segment to the API endpoint. So the URL that the client will use to retrieve data for a page will look like https://mysite.dev/api/pages/1234, where 1234 is the unique page id.
Add the following code to pages.php:

// if we have an urlsegment and it is a numeric string we get data from or update an existing page: handle GET and PUT requests
if($input->urlSegment1 && is_numeric($input->urlSegment1)) {
    $pageId = $input->urlSegment1;

    // GET request: get data from existing page
    if(Rest\Request::is('get')) {
        // get the page for given Id
        $p = $pages->get($pageId);

        if($p->id) {
            $pdata = ["id" => $pageId]; // array for storing page data with added page id
            $p->of(false); // set output formatting to false before retrieving page data

            // loop through the page fields and add their names and values to $pdata array
            foreach($p->template->fieldgroup as $field) {
                if($field->type instanceof FieldtypeFieldsetOpen) continue;
                $value = $p->get($field->name);
                $pdata[$field->name] = $field->type->sleepValue($p, $field, $value);
            }

            $response = $pdata;

        } else {
            // page does not exist
            $response["error"] = "The page does not exist";
            $statuscode = 404; // Not Found (see /site/templates/inc/Rest.php)
        }
    }

} else {
    // no url segment: handle POST requests
}

// render the response and body
http_response_code($statuscode);
header($header);
echo json_encode($response);

Let's break this down: First we check for a numeric URL segment, which is our $pageId. Then the Rest Request class comes into play and checks what type of request is coming in from the client. For the GET request, we want to return all data that is stored for a page plus the page id. This is all good old PW API code. I am using the $pdata array to store all page data. Then I am handing this array over to the $response variable. This will be used further down to render the JSON response body. If the page does not exist, I am setting an error message for the $response and a status code 404 so the client will know what went wrong. The else statement will later hold our POST request handling.
The last 3 lines of code set the header and status code for the response and print out the response body that is sent back to the client.
You can now browse to https://mysite.dev/api/pages/1 where you should see a JSON string with field names and values of your home page. If you enter a page id which does not exist, you should see a JSON string with the error message. Let's move on to updating pages through a PUT request.

3.2 update pages with a PUT request

Since our API needs to know the id of the page we want to update, we again need to append an id to our endpoint URL. In this example we will update the title and name of our homepage. So the request URL will be: https://mysite.dev/api/pages/1
For the GET request above, anyone can connect to our API to retrieve page data. For the PUT request this is not a good idea. Thus we will add basic authentication so that only authorized clients can make updates. I use basic HTTP authentication with username and password. In combination with the https protocol this should be fairly safe. To set this up, we need an API key for the password and a username of our choosing. We add the API key in the PW backend:
- add a new text field "key" and assign it to the "api" template.
- edit the "api" page, enter your key and save. (I am using 123456 as key for this tutorial)
Now add the following code right after the if(Rest\Request::is('get')) {...} statement:

// PUT request: update data of existing page
if(Rest\Request::is('put')) {
    // get data that was sent from the client in the request body + username and pass for authentication
    $params = Rest\Request::params();

    // verify that this is an authorized request (kept very basic)
    $apiKey = $pages->get("template=api")->key;
    $apiUser = "myapiuser";

    if($params["uname"] != $apiUser || $params["upass"] != $apiKey) {
        // unauthorized request
        $response["error"] = "Authorization failed";
        $statuscode = 401; // Unauthorized (see /site/templates/inc/Rest.php)
    } else {
        // authorized request
        // get the page for given Id
        $p = $pages->get($pageId);

        if($p->id) {
            $p->of(false);
            $p->title = $sanitizer->text($params["title"]);
            $p->name = $sanitizer->pageName($params["name"]);
            $p->save();
            $response["success"] = "Page updated successfully";
        } else {
            // page does not exist
            $response["error"] = "The page does not exist";
            $statuscode = 404; // Not Found (see /site/templates/inc/Rest.php)
        }
    }
}

Breakdown: We check if the request from the client is a PUT request. All data that was sent by the client is available through the $params array. The $params array also includes $params["uname"] and $params["upass"], which hold our API user and key. We set API key and user and check if they match the values that were sent by the client. If they don't match, we store an error message in the response body and set the appropriate status code. If authentication went through ok, we get the page via the PW API and update the values that were sent in the request body by the client. Then we put out a success message in the response body. If the page does not exist, we send the appropriate response message and status code, just like in the GET request example above.
Now you might wonder how the client sends API user/key and new data for updating title and name. This is covered further down in step 4. If you want to test the PUT request right now, head down there and follow the instructions. If not, continue reading on how to set up a POST request for creating new pages.
3.3 create new pages with a POST request

The final part of the coding is creating new pages through our API. For this to work we need to implement a POST request that sends all the data that we need for page creation. We will do this at our endpoint: https://mysite.dev/api/pages/
Paste the following code within the else statement that has the comment "// no url segment: handle POST requests":

// POST request: create new page
if(Rest\Request::is('post')) {
    // get data that was sent from the client in the request body + username and pass for authentication
    $params = Rest\Request::params();

    // verify that this is an authorized request (kept very basic)
    $apiKey = $pages->get("template=api")->key;
    $apiUser = "myapiuser";

    if($params["uname"] != $apiUser || $params["upass"] != $apiKey) {
        // unauthorized request
        $response["error"] = "Authorization failed";
        $statuscode = 401; // Unauthorized (see /site/templates/inc/Rest.php)
    } else {
        // authorized request
        // create the new page
        $p = new Page();
        $p->template = $sanitizer->text($params["template"]);
        $p->parent = $pages->get($sanitizer->text($params["parent"]));
        $p->name = $sanitizer->pageName($params["name"]);
        $p->title = $sanitizer->text($params["title"]);
        $p->save();

        if($p->id) {
            $response["success"] = "Page created successfully";
            $response["url"] = "https://mysite.dev/api/pages/{$p->id}";
        } else {
            // page could not be created
            $response["error"] = "Something went wrong";
            $statuscode = 404; // just as a dummy. Real error code depends on the type of error.
        }
    }
}

You already know what most of this code is doing (checking authorization etc.). Here's what is new: We create a page through the PW API and assign it a template, a parent and basic content that was sent by the client. We check if the page has been saved and update our response body array with a success message and the URL that this page will be accessible at through the API for future requests. The client can store this URL for making GET or PUT requests to this page.
If you're still reading, you have made it through the hard part of this tutorial. Congratulations. Having our code for reading, updating and creating pages, we now need a way to test the whole scenario. Read on to find out how this can be done.

4. test the whole setup with a Browser REST Client Addon

The link in the heading will take you to a place from which you can install the very useful RESTClient addon to your favorite browser. I am using it with Firefox, which is still the dev browser of my choice. Open a RESTClient session by clicking the little red square icon in the browser's addon bar. The UI is pretty straightforward and intuitive to use.

4.1 test the GET request

Choose method GET and fill in the URL to our endpoint. If you do not have an SSL setup for testing, just use http://yourrealtestdomain.dev/api/pages/1. If you happen to have an SSL test site with a self-signed certificate, you need to point your browser to the URL https://yourrealtestdomain.dev/api/pages/ first in your test browser and add the security exception permanently. Otherwise the RESTClient addon won't be able to retrieve data. If you have a test site with a 'real' SSL certificate, everything should be fine with using the https://... URL.
Hit send. In the Response Headers tab you should see a Status Code 200 and in the Response Body tabs a JSON string with data of your page. Now change the 1 in the URL to some id that does not exist in your site and hit send again.
You should get a 404 Status Code in the Response Headers tab and an error message "{"error":"The page does not exist"}" in the Response Body (Raw) tab. If you get these results, congrats! The GET request is working. For further testing you can save this request through the top menu Favorite Requests->Save Current Request.

4.2 test the PUT request

Choose method PUT and fill in the URL to our endpoint ending with 1 (http://yourrealtestdomain.dev/api/pages/1).
In the top left click Headers->Content-Type: application/json to add the right content type to our request. If you miss this step, the request will not work. You will now see a "Headers" panel with all your headers for this request.
Click on Authentication->Basic Authentication. In the modal window that pops up, fill in user (myapiuser) and password (your API key). Check "Remember me" and hit Okay. You should now see Content-Type and Authorization headers in the "Headers" panel.
Next, we need to send some data in the request body for updating our page title and name. Since we're using JSON, we need to create a JSON string that contains the data that we want to send. As I will update the home page for this example, my JSON reads

{
    "title" : "Newhome",
    "name" : "newhome"
}

Be careful that you have a well-formed string here. Otherwise you will get errors. Paste this into the "Body" panel of the RESTClient addon.
Hit send. In the Response Headers tab you should see a Status Code 200 and in the Response Body tabs a JSON string "{"success":"Page updated successfully"}". Now go to the PW backend and check if title and name of your page have been updated. If yes, congrats again.

4.3 test the POST request

Choose method POST and fill in the URL to our endpoint without any page id (http://yourrealtestdomain.dev/api/pages/).
In the top left click Headers->Content-Type: application/json to add the right content type to our request. If you miss this step, the request will not work. You will now see a "Headers" panel with all your headers for this request.
Click on Authentication->Basic Authentication. In the modal window that pops up, fill in user (myapiuser) and password (your API key). Check "Remember me" and hit Okay. You should now see Content-Type and Authorization headers in the "Headers" panel.
Next, we need to send some data in the request body for creating the new page. Since we're using JSON, we need to create a JSON string that contains the data that we want to send. I will create a new page with template basic-page and parent /about/ for this example, so my JSON reads

{
    "template" : "basic-page",
    "parent" : "/about/",
    "title" : "New Page created through api",
    "name" : "newapipage"
}

Be careful that you have a well-formed string here. Otherwise you will get errors. Paste this into the "Body" panel of the RESTClient addon.
Hit send. In the Response Headers tab you should see a Status Code 200 and in the Response Body tabs a JSON string "{"success":"Page created successfully","url":"https:\/\/mysite.dev\/api\/pages\/1019"}". Now go to the PW backend and check if the new page has been created. If yes, you're awesome!

Summary

By now you have learned how to build a simple REST API with ProcessWire for exchanging data with mobile devices or other websites.

Notes

I tested this on a fresh PW 2.7.2 stable install with the minimal site profile and can confirm the code is working. If you experience any difficulties in getting this to work for you, let me know and I will try to help.
There purposely is quite a lot of repetition in the example code to make it easier to digest. In real-life code you might not want to use a procedural coding style, but rather separate repeating logic out into classes/methods. Also, in live applications you should do more sanity checks for the authentication of clients with the API and for the data that is delivered by the client requests, plus more solid error handling. I skipped these to make the code shorter.
RESTful services are by definition stateless (sessionless). My implementation within PW still opens a new session for each request and I haven't found a way around that yet. If anyone can help out, this would be much appreciated.
And finally, big thanks to clsource for putting the Rest.php classes together.
pages.php.zip
    1 point
  19. You could check with your host to see if they are running mod_security. I just struck the same 404 problem when I use an include in my Hanna PHP code. My host lets me disable mod_security via htaccess, and sure enough when I disable it the 404 problem is gone.
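If you want to try the same, this is a sketch of the kind of .htaccess block that hosts running the older mod_security 1.x often allow (just an illustration; whether it works at all depends on the mod_security version and on what your host permits in .htaccess, and mod_security 2.x usually has to be disabled by the host):

<IfModule mod_security.c>
    SecFilterEngine Off
    SecFilterScanPOST Off
</IfModule>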
    1 point
  20. Exactly as @Craig said: for example, if you want to import two templates that have a parent/child family relation, they cannot be imported in one go, as for each of them the other referenced family template isn't known to the system yet. But it works fine if you first leave out the family settings. Click/select YES for "Create this template?" for both templates and commit the form. In the next screen, you simply click ok, and in the next step you paste in the data again and click preview. Now you will see both templates with the opened settings. One shows the new property "parentTemplates" and the other a new property "childTemplates". Lastly click commit changes and you are done.
    1 point
  21. Database migrations are "better" supported in other frameworks because they wouldn't function without them. A Laravel application doesn't have any database tables besides those set up by migrations. But these are actually really bare-bones in that they mostly apply schema changes. For migrating db rows most people use database seeding libraries, which then create the necessary models to be stored. Both can be done with the ProcessWire API without actually needing to learn something new.
This is basically what model factories do in frameworks (besides the autofill functionality):

$pages->add('basic-page', $pages->get("/url/"), 'profile', array(
    "someField" => "value",
    "anotherField" => 202
));

And this may be what migrations do:

// Create field
$f = new Field();
$f->name = 'text';
$f->type = 'FieldtypeTextarea';
…
$f->save();

// Add to template
$bpt = $templates->get('basic-page');
$bpt->fields->add($f);
$bpt->fields->save();
    1 point
  22. You know, I've encountered Suhosin three times on various shared hostings so far — and it was always interfering with running websites we put there. "Suhosin, premiere tool for misconfiguration of PHP."
    1 point
  23. Probably WordPress. I'm guessing its default configuration has at least something to do with WP. There doesn't seem to be anything inherently wrong with Suhosin (rather, just how it's configured), and in fact it seems like it could be quite a useful security tool. But it would have to be configured for the software it's running with, otherwise seems like it's very likely going to interfere with that software.
    1 point
  24. thank you adrian, never used those configs so i was not aware of them. thanks for the advice! will use wireTempDir here and in future
    1 point
  25. Sure that will work, but I still think it's better to create the PDF in a temp directory. Also, just an FYI, building the path up using:

$path = wire('config')->paths->files . $page->id . '/';

is not ideal. This is better:

$page->filesManager()->path()

because it handles custom paths, just in case you have $config->pagefileSecure set to true, or if you have $config->pagefileExtendedPaths set to true. But still, if you are using a temp dir, then this won't matter anymore.
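To illustrate the temp-dir approach, a rough sketch (the "pdf_file" field name and the PDF generation step are placeholders, not from the posts above):

// build the PDF in a temporary location first, then attach it to the page
$tmpPath = sys_get_temp_dir() . '/report-' . $page->id . '.pdf';

// ... generate the PDF and write it to $tmpPath here ...

$page->of(false);
$page->pdf_file->add($tmpPath); // Pagefiles::add() copies the file into the page's files directory
$page->save('pdf_file');
unlink($tmpPath); // remove the temporary copy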
    1 point
  26. If someone really needs lots of free emails (100.000): https://www.sparkpost.com/pricing And a blogpost regarding future plans https://www.sparkpost.com/blog/my-promise-to-developers-sparkpost-pricing/
    1 point
  27. Hi @adrian - awesome! Thanks for keeping me in the loop. If Ryan has deemed it "beta", it's safe to say that it will be eminently usable. Time to dive in!
    1 point
  28. Any chance it has to do with where you are generating the pdf? I would do it in a temp directory first - I think there is likely a conflict with saving it to the assets/files/xxxx folder of the page and then adding it to the same page.
    1 point
  29. New version just posted today as beta:
    1 point
  30. All the $pages->find() calls are evaluated while creating $stats, which in turn means $stats is not defined at that moment. If you want the count to run "on-demand" you'd need to look into using anonymous functions; see the sketch below. Also, using $pages->find()->count is most of the time an anti-pattern. If you're not using those pages but only the count, rather use $pages->count($selector). The latter only counts the pages in the db, whereas your version loads all those pages and counts them in memory (PHP).
Edit: Just to make things more clear about the array creation. You're building up an array and after the whole array is computed – with all dynamic calls being evaluated to static values – you're saving the data into the variable $stats. You could go another way, like this:

$stats = array(); // Create array; $stats !== undefined
$stats["total"] = array(
    "main" => array(),
    "yes" => array()
);

// Update each value one at a time
$stats["total"]["main"]["selector"] = "template=50, parent=$formSuperSelector";
$stats["total"]["main"]["count"] = $this->pages->count($stats['total']['main']['selector']);
…
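A minimal sketch of the anonymous-function approach mentioned above (the selector is reused from the example; caching the result inside the closure is left out for brevity):

$stats = array(
    "total" => array(
        "main" => array(
            "selector" => "template=50, parent=$formSuperSelector",
            // store a closure instead of a precomputed number
            "count" => function() use ($formSuperSelector) {
                return wire('pages')->count("template=50, parent=$formSuperSelector");
            },
        ),
    ),
);

// later, only when the number is actually needed:
$mainCount = $stats["total"]["main"]["count"]();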
    1 point
  31. I would suggest considering moving a smaller site where this happens regularly to another server with a different host if you can to see if that resolves things. That would pinpoint a server config issue right away if the issue is happening a lot.
    1 point
  32. You did it again adrian Thanks a lot! Regarding protocol handlers, these links might be of interest too: https://pla.nette.org/en/how-open-files-in-ide-from-debugger and https://github.com/aik099/PhpStormProtocol
    1 point
  33. bumped version -> 0.0.9
Changed style injection: it now prepends to the first <link> in head, which makes it easier to add custom css tweaks without the need for !important, because of the cascading order.
Still need to get the language fields to work
    1 point
  34. An image field to capture the gravatar image: https://de.gravatar.com/site/implement/images/php/ Great project! Thumbs up. Regards, mr-fan
    1 point
  35. Sneak peek : https://github.com/plauclair/WireMailMailgun I've started implementing Mailgun. It's mostly working except for some stuff. Look at the tags for 0.1 prerelease.
    1 point
  36. Version 1.1.0 is now released. It's been submitted to the module directory so should appear there soon. In the meantime, it's available on GitHub: https://github.com/gRegorLove/ProcessWire-Webmention. Please refer to the updated README there and let me know if you have any questions!
    1 point
  37. At the moment, in the plugin.js of the image plugin for the CKEditor, the alignment class for the image will be added in the following ways:
1) Image without link and without caption: alignment class will be added to the image tag - OK (outermost tag)
2) Image without link and with caption: alignment class will be added to the figure tag - OK (outermost tag)
3) Image with link and without caption: alignment class will be added to the img tag, but this is not the outermost tag - NOT OK
4) Image with link and with caption: alignment class will be added to the figure tag - OK (outermost tag)
Point 3: The alignment class should always be added to the outermost tag (in this case the anchor tag), because it would make markup manipulation via textformatter modules much easier. At the moment it is a very hard struggle to get the image alignment to work if you want to do image markup manipulation. So it would be great to change the plugin.js to always add the alignment class to the outermost tag of the image part.
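To illustrate point 3, the markup looks roughly like this (align_left and the paths are just example values):

<!-- current: the class ends up on the img, which is not the outermost tag -->
<a href="/some-page/"><img class="align_left" src="/site/assets/files/1/photo.jpg" alt=""></a>

<!-- suggested: the class goes on the outermost tag, here the anchor -->
<a class="align_left" href="/some-page/"><img src="/site/assets/files/1/photo.jpg" alt=""></a>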
    1 point
  38. i use this, which was posted by willyc somewhere, just pulled it from my code snippets: what.i use this is good it does.work top {not buttock}, of htaccess u will.put it . enjoy

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresDefault "access plus 1 seconds"
    ExpiresByType image/x-icon "access plus 1 year"
    ExpiresByType image/jpeg "access plus 1 year"
    ExpiresByType image/png "access plus 1 year"
    ExpiresByType image/gif "access plus 1 year"
    ExpiresByType text/css "access plus 1 month"
    ExpiresByType text/javascript "access plus 1 month"
    ExpiresByType application/octet-stream "access plus 1 month"
    ExpiresByType application/x-javascript "access plus 1 month"
</IfModule>

<IfModule mod_headers.c>
    <FilesMatch "\.(ico|jpe?g|png|gif|swf|woff)$">
        Header set Cache-Control "max-age=31536000, public"
    </FilesMatch>
    <FilesMatch "\.(css)$">
        Header set Cache-Control "max-age=2692000, public"
    </FilesMatch>
    <FilesMatch "\.(js)$">
        Header set Cache-Control "max-age=2692000, private"
    </FilesMatch>
    <FilesMatch "\.(js|css|xml|gz)$">
        Header append Vary: Accept-Encoding
    </FilesMatch>
    Header unset ETag
    Header append Cache-Control "public"
</IfModule>

<IfModule mod_deflate.c>
    AddOutputFilter DEFLATE js css
    AddOutputFilterByType DEFLATE text/html text/plain text/xml application/xml
    BrowserMatch ^Mozilla/4 gzip-only-text/html
    BrowserMatch ^Mozilla/4\.0[678] no-gzip
    BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
</IfModule>
    1 point