jom

Members
  • Content Count: 10
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About jom
  • Rank: Jr. Member

Profile Information
  • Gender: Male
  • Location: Zürich

  1. I just noticed that the folders are deleted now, so maybe it's all fine. Does anyone know how the deletion works? Does PW execute some cron job from time to time that runs over the temp directory?
  2. Olsa, thanks for Soma's post. This might be a solution if wireTempPath won't work, but I still hope to get it to work. The question is actually quite simple: how can I get wireTempPath to delete its folders?
  3. Hi everyone
     It seems that I don't fully understand the wireTempPath() function and I need some help. I use wireTempPath() to create a new location in assets/cache/WireTempDir and then copy a PDF from the assets/files/page folder to the new folder. I want the file to be accessible only for a limited time, that's why I use wireTempPath. The file seems to be copied to the right location, but gets deleted right afterwards. As mentioned in the topic above, $wireTempDir->setRemove(false); prevents the file from being deleted, but I would like the file to be removed automatically after a few days. So, how can I do that?
     My code so far (everything works except the automatic removal of the tempDir folder):

     //generate and show download link
     $folder = time(); // timestamp as temporary folder
     $maxAge = (int) $settings->options_downloadlink_valid_hours * 3600; // tempDir wants maxAge in seconds
     $options = array('maxAge' => $maxAge);
     $wireTempDir = wireTempDir($folder, $options);
     $wireTempDir->setRemove(false);
     $src_file = $page->ebook_download->filename;

     // Create a new directory in ProcessWire's cache dir
     if(wire('files')->mkdir($wireTempDir, $recursive = true)) {
         if(wire('files')->copy($src_file, $wireTempDir)) {
             // get subdirs from tempDir:
             $pos = strpos($wireTempDir, "WireTempDir");
             $subdir = substr($wireTempDir, $pos, 100);
             $out .= "<p><a href='" . wire('pages')->get('template=passthrough')->httpUrl . "?file=" . $subdir . $page->ebook_download->basename . "' target='_blank'>$page->title</a></p>";
         }
     }

     The link points to a page using a 'passthrough' template that serves the file (a sketch of one possible passthrough template follows after this list of posts). I appreciate any ideas - thanks! Oliver
  4. Hi
     I'm also playing with Ryan's new module LOGIN/REGISTER/PROFILE (LoginRegister). I wonder if it's possible (and if it makes sense) to use it without the login/profile functionality. I need a registration form for the newsletter; the only fields I need are email address and user name, but I can't get rid of the required password field. I could of course hide it in the frontend and fill it with some generated data (roughly along the lines of the sketch after this list of posts), but that's not very elegant. Actually I already have a working solution with SimpleForms, I just thought it might be more straightforward to use Ryan's basic module. What do you think?
  5. In the meantime I got the mod_rewrite error log. Indeed, there is an error related to my topic:

     [Wed Oct 05 18:42:00.073367 2016] [core:error] [pid 85277:tid 34669321216] (63)File name too long: [client 12.34.56.78:54004] AH00127: Cannot map GET [here follows the repeated path]

     It seems the path gets into a loop - it is repeated about 10 times in one string. Now I'm lost - where do I go from here? By the way, in the first repetition the path includes the urlSegments; the following repetitions are without them.
  6. I'm on a shared hosting environment, so I'm not sure if I can get the mod_rewrite logs; I have asked the hoster for them. In the meantime I implemented a workaround. Since the urlSegments are not vital, I skip them if a 404 is caused by a too-long page name. For this I wrote a function that hooks into ProcessPageView::pageNotFound:

     public function init() {
         $this->addHookBefore('ProcessPageView::pageNotFound', $this, 'displayItem');
     }

     public function displayItem($event) {
         // redirect to the url without segments if a long page name is involved
         $pageWithLongTitle = '';
         $file = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
         $urlParts = explode('/', $file);
         $urlWithoutSegments = '';
         foreach($urlParts as $urlPart) {
             if($urlPart) {
                 $urlWithoutSegments .= $urlPart . '/';
                 if(strlen($urlPart) > 110) {
                     $pageWithLongTitle = $urlPart;
                     break;
                 }
             }
         }
         // if there is a long title in the url and a matching page exists, then redirect
         // (checking ->id avoids treating a NullPage as a match)
         if($pageWithLongTitle && wire('pages')->get("title=$pageWithLongTitle")->id) {
             wire('session')->redirect('/' . $urlWithoutSegments);
         }
     }

     Thanks for the help!
  7. Thanks for your input, BitPoet. I've tried several computers and browsers in different locations - no difference. The request URI matches the logged URI. Not sure about the "it" parameter: I inserted echo wire('input')->get('it'); into index.php and got no output - did I get you wrong? There weren't any third-party modules present when the problem occurred the first time (apart from ImportPagesCSV). Where would you set the limit for the page name - with a module hooking into page save?
  8. Yes, urlSegments are enabled; everything works fine with shorter URLs.
  9. Hi everyone
     Since this is my first post: thanks for this beautiful CMS/CMF and thanks for this high-quality forum. I'm struggling with the following: a client imports a list of books from a CSV file (using the module ImportPagesCSV). The page name gets autogenerated by the core (it takes the value from the title field). If the title is very long, the page name gets truncated to 128 characters. This works fine as long as I don't use urlSegments; with them, PW returns a 404:

     http://domain.tld/...long-title-here-128-characters.../ : works
     http://domain.tld/...long-title-here-128-characters.../urlSegment1/urlSegment2/ : does not work

     It works fine when I shorten the page name manually to e.g. 100 characters, but that is not good practice, since the client imports the list every couple of weeks. Is this correct behaviour? Should I limit the page name via the API to 100 characters? If yes, would I do this in the ImportPagesCSV module, or better implement it as a hook (see the sketch at the end of this list of posts for one possible approach)? Thanks in advance for some light in the dark! Oliver
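
A note on the download-link setup in post 3: the code there links to a page with a 'passthrough' template and a ?file= query parameter, but the post doesn't show that template. Below is a minimal, hypothetical sketch of what it could look like, assuming ?file= carries a path of the form "WireTempDir.../file.pdf" relative to ProcessWire's cache directory; everything beyond what the post shows is an assumption, not the poster's actual code.

    <?php namespace ProcessWire;
    // Hypothetical 'passthrough' template (assumption, not from the original post).
    // Expects ?file= to point at "WireTempDir.../file.pdf" below the cache directory.
    $relPath = wire('sanitizer')->text(wire('input')->get('file'));

    // refuse anything that tries to climb out of the cache directory
    if(strpos($relPath, '..') !== false) throw new Wire404Exception();

    $fullPath = wire('config')->paths->cache . $relPath;

    if(is_file($fullPath)) {
        // stream the PDF to the visitor and stop
        wireSendFile($fullPath, array('forceDownload' => true, 'exit' => true));
    } else {
        // the temp dir has expired or was removed in the meantime
        throw new Wire404Exception();
    }

Once the temp folder is gone (via maxAge or otherwise), the link simply starts returning a 404, which matches the "accessible only for a limited time" behaviour described in the post.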
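On the newsletter question in post 4: if the form is custom anyway, the unwanted password could be filled with generated data by creating the subscriber through the API. This is only a sketch under assumptions - $email is a hypothetical, already-sanitized variable and the 'newsletter' role would have to exist:

    <?php namespace ProcessWire;
    // Hypothetical sketch: create a newsletter subscriber via the API with a
    // throwaway random password. $email and the 'newsletter' role are assumptions.
    $u = new User();
    $u->name = wire('sanitizer')->pageName($email, true); // derive a user name from the address
    $u->email = $email;
    $u->pass = bin2hex(random_bytes(16)); // generated password nobody ever needs to type
    $u->addRole('newsletter');
    $u->save();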
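And on the long-page-name issue in posts 5-9: if limiting the name via the API turns out to be the way to go, a hook is probably easier to maintain than patching ImportPagesCSV. A hedged sketch that could live in site/ready.php; the 100-character limit and the 'book' template name are assumptions:

    <?php namespace ProcessWire;
    // Hypothetical hook: shorten over-long page names before saving,
    // so urlSegments keep working after each CSV import.
    wire()->addHookBefore('Pages::saveReady', function(HookEvent $event) {
        $page = $event->arguments(0);
        if($page->template != 'book') return; // hypothetical template of the imported books
        if(strlen($page->name) > 100) {
            // truncate, then re-sanitize so the result is still a valid page name
            $page->name = $event->wire('sanitizer')->pageName(substr($page->name, 0, 100));
        }
    });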