
Hector Nguyen

Members
  • Posts
    20
  • Joined

  • Last visited

About Hector Nguyen

  • Birthday September 23

Contact Methods

  • Website URL
    https://octopius.com

Profile Information

  • Gender
    Male
  • Location
    The Colony

Recent Profile Visitors

1,645 profile views

Hector Nguyen's Achievements

Jr. Member (3/6)

32 Reputation

  1. @MoritzLost Have you encountered any issues with the autoloader? I used this inside public/site/init.php:

         require __DIR__ . '/../vendor/autoload.php';

     But the autoloader doesn't work properly; it says my namespace is undefined:

         2021-04-14 08:38:10 nntoan http://xxx.test/
         Fatal Error: Uncaught Error: Call to undefined function QFramework\Phrase() in /srv/users/capima/webapps/xxx/releases/1/public/site/templates/home.php:10
         Stack trace:
         #0 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/TemplateFile.php(318): require()
         #1 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/Wire.php(394): ProcessWire\TemplateFile->___render()
         #2 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/WireHooks.php(823): ProcessWire\Wire->_callMethod()
         #3 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/Wire.php(465): ProcessWire\WireHooks->runHooks()
         #4 /srv/users/capima/webapps/xxx/releases/1/public/wire/modules/PageRender.module(536): ProcessWire\Wire->__call()
         #5 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/Wire.php(397): ProcessWire\PageRender->___renderPage()
         #6 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/WireHooks.php(823): ProcessWire\Wire->_callMethod()
         #7 /srv/users/capima/webapps/xxx/releases/1/public
         (line 10 of /srv/users/capima/webapps/xxx/releases/1/public/site/templates/home.php)

     Any ideas?
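     [Editor's note: one possible explanation, a sketch rather than a confirmed diagnosis. PHP autoloads classes but never plain functions, so a namespaced function like QFramework\Phrase() must come from a file Composer includes eagerly via its "files" autoload directive (or an explicit require); the PSR-4 autoloader alone will not load it. ProcessWire 3.x also compiles template files into the ProcessWire namespace, so the call must be fully qualified or imported. The paths and usage below are assumptions based on the error above:]

         <?php namespace ProcessWire;

         // site/init.php (hypothetical): loading Composer's autoloader is
         // necessary, but it only registers *class* autoloading.
         require_once __DIR__ . '/../vendor/autoload.php';

         // site/templates/home.php (hypothetical): import the function
         // explicitly. It must also live in a file listed under "files"
         // in composer.json's autoload section, because PHP never
         // autoloads functions on demand.
         // use function QFramework\Phrase;
         // echo Phrase('some-key');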
  2. @ryan I'm afraid not: fgets() in PHP returns one string (a line), but fgetcsv() returns an indexed array, and arrays in PHP are not very friendly to big datasets because everything is still held in memory! The main difference is memory usage. In the first case, all the data is in memory when you return the array; no matter what, returned values are kept in memory. In the second case you get a lazy approach, and you can iterate the values without keeping all of them in memory. That is a great benefit when memory is constrained relative to the total size of your data. I've managed to import a 4 GB CSV file on a shared VPS with 1 GB of memory in seconds; with the "yield" method, the bottleneck becomes MySQL.
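     [Editor's note: a minimal sketch of the eager-versus-lazy difference described above; the file name and CSV layout are hypothetical:]

         <?php
         // Eager: the entire dataset accumulates in one array, so peak
         // memory grows with the file size.
         function getRowsEager(string $file): array {
             $rows = [];
             $handle = fopen($file, 'rb');
             while (($row = fgetcsv($handle)) !== false) {
                 $rows[] = $row; // every row stays in memory
             }
             fclose($handle);
             return $rows;
         }

         // Lazy: a generator yields one row at a time, so memory stays
         // flat no matter how big the file is.
         function getRowsLazy(string $file): \Generator {
             $handle = fopen($file, 'rb');
             while (($row = fgetcsv($handle)) !== false) {
                 yield $row; // only the current row is in memory
             }
             fclose($handle);
         }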
  3. I think you could try generators here while you are playing with CSV. For example:

         <?php

         function getRows($file) {
             $handle = fopen($file, 'rb');
             if ($handle === false) {
                 throw new Exception('open file ' . $file . ' error');
             }
             while (feof($handle) === false) {
                 yield fgetcsv($handle);
             }
             fclose($handle);
         }

         // Memory is allocated for only a single line of the CSV file;
         // the entire file does not need to be read into memory.
         $generator = getRows('../data/20_mil_data.csv');

         // foreach ($generator as $row) { print_r($row); }
         while ($generator->valid()) {
             print_r($generator->current()); // $generator->current() is your $row
             // play with ProcessWire here
             $generator->next();
         }
         // Note: a generator cannot be rewound once iteration has started;
         // calling $generator->rewind() here would throw an exception.

         // http://php.net/manual/en/class.generator.php

     That's always my #1 choice when working with big datasets in PHP.
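     [Editor's note: for the "play with ProcessWire here" step, a hypothetical loop body might create one page per CSV row. The "record" template, the /records/ parent, and the column mapping are assumptions, not from the original post; this assumes it runs where ProcessWire's $pages API variable is in scope:]

         <?php namespace ProcessWire;

         foreach (getRows('../data/20_mil_data.csv') as $i => $row) {
             if (!is_array($row)) continue; // fgetcsv() yields false at EOF
             $p = new Page();
             $p->template = 'record';               // hypothetical template
             $p->parent = $pages->get('/records/'); // hypothetical parent
             $p->title = $row[0];
             $p->save();
             if ($i % 1000 === 0) gc_collect_cycles(); // keep long imports lean
         }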
  4. Will all plugins from 2.7.x or 2.8.x work on the 3.x version? In the plugin directory I saw only a few plugins compatible with 3.x, which is why I asked about 3.x vs 2.x. Another question: can I manage my own plugin via Composer? Or can we install ProcessWire via Composer? For some not-so-secret reasons, I'm married to Composer.
  5. Regarding #5 (config files): ProcessWire does store the database configuration in site/config.php, doesn't it?
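     [Editor's note: yes; for reference, the database settings in site/config.php look like this. The values below are placeholders, not from the original post:]

         <?php namespace ProcessWire;

         // site/config.php - database settings
         $config->dbHost = 'localhost';
         $config->dbName = 'processwire';
         $config->dbUser = 'pw_user';
         $config->dbPass = 'secret';
         $config->dbPort = 3306;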
  6. Hi @szabesz, thanks for your interest. I've put up some (admittedly poor) documentation here: https://magephp.github.io I'm using GitLab and GitLab CI for all my projects. The purpose of the tool is: whenever I push or accept a merge request to the production branch (master) or the staging branch on GitLab, GitLab CI starts building my project and executes the Magallanes deploy command. The repository changes are pulled to my server as a new release, and that release directory is symlinked to my `public_html` directory. I'm using two environments: production and staging. My Magallanes config for staging looks like this (located at <project_folder>/.mage/config/environments/staging.yml):

         # staging environment
         deployment:
           strategy: git-remote-cache       # git clone a bare repository on your server, then use git archive to get your code faster than the usual ways
           user: www                        # your webserver user, or any user with read/write access to the webserver group
           port: 998                        # ssh port
           from: ./                         # local source code directory; e.g. if you put your source code into src/, change this
           to: /srv/users/www/apps/magento  # your remote directory, which contains the public directory
           excludes_file: .rsync_excludes   # all files/directories declared in this file will be deleted if you use the rsync strategy
         extras:
           enabled: true                    # don't turn it off :)
           directory: shared                # your bloody shared directory on the server
           vcs:
             enabled: true
             kind: git
             repository: git@gitlab.com:xxx/magento.git
             branch: staging                # change this if you use a different branch, obviously
             remote: origin                 # see the comment above
             directory: repo
           rsync:
             enabled: false                 # set to true if you use the rsync-remote-cache strategy
             from: ./
             local: .rsync_cache
             remote: cached-copy
           magento:
             enabled: false                 # set to true if you are deploying a Magento application
             app_path: bin/magento          # the magento executable
           shared:                          # most important section
             enabled: true                  # always keep this enabled
             linking_strategy: absolute     # I do not suggest changing this to relative
             linked_files:                  # all files you want to keep across deploys
               - app/etc/env.php
               - var/.magento_cronjob_status
               - var/.setup_cronjob_status
               - var/.update_cronjob_status
               - sitemap.xml
             linked_folders:                # directories with dynamic assets (logs, product images, sessions) you want to keep
               - pub/media
               - var/log
               - var/session
         releases:
           enabled: true                    # I recommend turning release mode on :)
           max: 10                          # how many releases to keep in the releases directory
           symlink: public                  # the symlink name for each release after deploy; may be public, public_html, ... depending on your webserver configuration
           directory: releases              # the directory that holds releases; created inside 'to' of the 'deployment' section
         hosts:
           - bloody-production-machine      # your IP or SSH alias; I recommend SSH aliases, for goodness' sake
         tasks:
           pre-deploy:                      # tasks executed before deploy
           on-deploy:                       # tasks executed while deploying
           post-release:                    # tasks executed right after a release is created
             - composer/update
             - magento/staging-setup
             - magento/set-permissions
             - filesystem/link-shared-files
           post-deploy:                     # tasks executed right after all the sections above complete

     With the above configuration, Magallanes works through the following sequence (every remote task in Magallanes opens a new SSH connection):
     1. Check that your SSH works, and create the extras >> directory on your server if it doesn't exist.
     2. Clone a bare repository into the 'shared' directory from extras >> vcs >> repository.
     3. If releases >> enabled == true, create a new directory on your server named releases >> directory, then use git archive from the bare repository into a new directory at releases >> directory >> unix_time.
     4. Check the tasks section and execute all tasks inside post-release.
     5. If there were no errors, create a new symlink in your deployment >> to under the name releases >> symlink.
     6. Done.

     If you need further information, please let me know. Sorry for my bloody English.
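     [Editor's note: for completeness, the GitLab CI side of this setup could be as small as the sketch below. The job name, stage, and the bin/mage path are assumptions, not from the original post; Magallanes 1.x is invoked as `mage deploy to:<environment>`:]

         # .gitlab-ci.yml - minimal sketch, names are assumptions
         stages:
           - deploy

         deploy_staging:
           stage: deploy
           only:
             - staging
           script:
             - composer install --no-dev
             - bin/mage deploy to:staging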
  7. Hello everyone, I haven't played much with ProcessWire; my project has just started and I have no experience with how ProcessWire behaves once it goes live. I'm using a tool (Magallanes) to deploy my code to the production server automatically after the build passes. Please help me answer the questions below:
       • Which folders/files will be created by the application after it is installed?
       • Which folders/files are usually overwritten?
       • Which folders/files are used for cache, sessions, or temporary data?
       • Does ProcessWire support logs? If so, where are they?
       • Which files are used as configuration files?
       • Should I be concerned about template configuration files?
     Any help would be appreciated. Thank you!
  8. One more question: which version should I use, PW 2.7 or 3.0? Is there any comparison between the two versions? I tried them both myself and saw no differences at all.
  9. I'm sorry if this content is duplicated, but I'm wondering if it works with the 3.0 version?
  10. Thanks to @mr-fan and the others. It's much clearer to me now: so I may have Categories/Category and Books/Book, but Chapter will stand alone? I've also taken a note and stuck it on my screen: "Pages are everything in PW". Thank you (all of you, and yes, you) once again. I'm trying it right now and will come back soon.
  11. Hi @szabesz, as I understand it, you recommend I use one-to-many to model the relationship in my case, right? But there are two things confusing me right now. How does Chapters/Chapter make sense in my case? There are thousands of chapters per book, so if I use a relationship like Chapters/Chapter, it means one "Chapters" template contains thousands of "Chapter" pages. How can I live with that? It's quite confusing to me.
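     [Editor's note: one hedged remark on the scale concern above: in ProcessWire you never have to load all children at once, because selectors support limit and return a paginated set. A minimal sketch; the "chapter" template name and the book path are assumptions, and renderPager() needs pagination enabled on the template:]

         <?php namespace ProcessWire;

         // List one screen of chapters at a time instead of loading
         // thousands of pages into memory.
         $book = $pages->get('/books/some-book/');   // hypothetical path
         $chapters = $book->children('template=chapter, limit=50');
         foreach ($chapters as $chapter) {
             echo "<li>{$chapter->title}</li>";
         }
         echo $chapters->renderPager(); // links to the next 50 chapters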
  12. I know that package, but it seems people have forgotten about it nowadays, so I picked that name.
  13. I'm using NodeQuery to monitor all servers I have. It is SaaS and quite useful to me.
  14. Actually, mosh (mobile shell) is what I'm using at the moment, not my own script. But mosh is under heavy development, it refuses to work with mouse scrolling, and some hosting providers (scaleway.com, for example) don't let you whitelist a port range. That's why I wrote this, to cover a few cases like SSH tunneling or NAT...
  15. Hello there! If you live in the terminal like me, I assume you will love this script. If you are often SSH-ing into servers, I believe you know the headache of trying to keep your SSH sessions alive. Most of the time it can be resolved by adding this to your ~/.ssh/config:

         Host *
             ServerAliveInterval 60

     But if you are working at a company, or you connect to the Internet through a VPN/proxy, then your SSH sessions can still be really unstable, failing like this:

         Write failed: broken pipe
         packet_write_wait connection to: XXXXX

     To use this script, download the attached file below and move it to /usr/local/bin, /usr/bin, or ~/bin, or simply put it anywhere you want. Assuming you put it into /usr/local/bin, you can then use the autossh command globally.

     USAGE

     It's simple: run autossh alone, or with parameters:

         autossh your_user@your_server_ip

     Below is the whole script:

         #!/bin/bash
         # ------------------------------------------------------------------------------
         # FILE: autossh
         # DESCRIPTION: This is an SSH-D proxy with auto-reconnect on disconnect
         # AUTHOR: Hector Nguyen (hectornguyen at octopius dot com)
         # VERSION: 1.0.0
         # ------------------------------------------------------------------------------
         VERSION="1.0.0"
         GITHUB="https://github.com/hectornguyen/autossh"
         AUTHOR="Hector Nguyen"
         SCRIPT=${0##*/}
         IFS=$'\n'
         ALIVE=0
         HISTFILE="$HOME/.autossh.history"

         # Use colors, but only if connected to a terminal that supports them.
         if which tput >/dev/null 2>&1; then
             ncolors=$(tput colors)
         fi
         if [ -t 1 ] && [ -n "$ncolors" ] && [ "$ncolors" -ge 8 ]; then
             RED="$(tput setaf 1)"
             GREEN="$(tput setaf 2)"
             YELLOW="$(tput setaf 3)"
             BLUE="$(tput setaf 4)"
             BOLD="$(tput bold)"
             NORMAL="$(tput sgr0)"
         else
             RED=""
             GREEN=""
             YELLOW=""
             BLUE=""
             BOLD=""
             NORMAL=""
         fi

         # Progress indicators
         start_progress() {
             while true; do
                 echo -ne "#"
                 sleep 1
             done
         }

         quick_progress() {
             while true; do
                 echo -ne "#"
                 sleep .033
             done
         }

         long_progress() {
             while true; do
                 echo -ne "#"
                 sleep 3
             done
         }

         dot_progress() {
             for i in {1..100}; do
                 printf "."
                 sleep .033
             done
             echo_c green " 100%${NORMAL}"
             sleep 1
         }

         stop_progress() {
             kill $1
             wait $1 2>/dev/null
             echo -en "\n"
         }

         # Case-insensitive regex matching
         shopt -s nocasematch

         # Prepare history mode
         set -i
         history -c
         history -r

         # Input method
         get_input() {
             read -e -p "${BLUE}$1${NORMAL}" "$2"
             history -s "${!2}"
         }

         # Echo in bold
         echo_b() {
             if [ "$1" = "-e" ]; then
                 echo -e "${BOLD}$2${NORMAL}"
             else
                 echo "${BOLD}$1${NORMAL}"
             fi
         }

         # Echo in colour
         echo_c() {
             case "$1" in
                 red    | r | -red    | -r | --red    | --r ) echo "${RED}$2${NORMAL}" ;;
                 green  | g | -green  | -g | --green  | --g ) echo "${GREEN}$2${NORMAL}" ;;
                 blue   | b | -blue   | -b | --blue   | --b ) echo "${BLUE}$2${NORMAL}" ;;
                 yellow | y | -yellow | -y | --yellow | --y ) echo "${YELLOW}$2${NORMAL}" ;;
                 * ) echo "${BOLD}$2${NORMAL}" ;;
             esac
         }

         # Get data from parameters
         if [[ ! -n "$remote_param" && -n "$1" ]]; then
             remote_param="$1"
             remote_user="${remote_param%%@*}"
             remote_ip="${remote_param##*@}"
         fi

         # Get input data and save it to history
         save_input() {
             if [[ ! -n "$remote_user" && ! -n "$1" ]]; then
                 while get_input "SSH Username > " remote_user; do
                     if [ -n "$remote_user" ]; then
                         break
                     fi
                 done
             fi
             if [[ ! -n "$remote_ip" && ! -n "$1" ]]; then
                 while get_input "SSH Alias/IP-address > " remote_ip; do
                     if [ -n "$remote_ip" ]; then
                         break
                     fi
                 done
             fi
         }

         # Infinite loop that keeps reconnecting
         auto_connect() {
             while true; do
                 exist=$(ps aux | grep "$remote_user@$remote_ip" | grep 22)
                 if test -n "$exist"; then
                     if test $ALIVE -eq 0; then
                         echo_c yellow "I'm alive since $(date)"
                     fi
                     ALIVE=1
                 else
                     ALIVE=0
                     echo_c red "I'm dead... God is bringing me back..."
                     clear
                     printf "${GREEN}Connecting: "
                     for i in {1..100}; do
                         printf "."
                         sleep .033
                     done
                     echo_c green " 100%${NORMAL}"
                     sleep 1
                     clear
                     ssh $remote_user@$remote_ip
                 fi
                 sleep 1
             done
         }

         main() {
             save_input
             auto_connect
         }

         main

     Hope this helps. autossh.sh