
Hector Nguyen

Members
  • Posts: 20
  • Joined
  • Last visited

Posts posted by Hector Nguyen

1. @MoritzLost Did you encounter any issues with the autoloader? I used this inside public/site/init.php:

    require __DIR__ . '/../vendor/autoload.php';

    But the autoloader doesn't work properly; it says my namespaced function is undefined:

    2021-04-14 08:38:10	nntoan	http://xxx.test/	Fatal Error: 	Uncaught Error: Call to undefined function QFramework\Phrase() in /srv/users/capima/webapps/xxx/releases/1/public/site/templates/home.php:10 Stack trace: #0 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/TemplateFile.php(318): require() #1 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/Wire.php(394): ProcessWire\TemplateFile->___render() #2 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/WireHooks.php(823): ProcessWire\Wire->_callMethod() #3 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/Wire.php(465): ProcessWire\WireHooks->runHooks() #4 /srv/users/capima/webapps/xxx/releases/1/public/wire/modules/PageRender.module(536): ProcessWire\Wire->__call() #5 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/Wire.php(397): ProcessWire\PageRender->___renderPage() #6 /srv/users/capima/webapps/xxx/releases/1/public/wire/core/WireHooks.php(823): ProcessWire\Wire->_callMethod() #7 /srv/users/capima/webapps/xxx/releases/1/public (line 10 of /srv/users/capima/webapps/xxx/releases/1/public/site/templates/home.php)

    Any ideas?
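    For context, my current guess at a workaround: Composer's autoloader only registers classes, not plain functions, so a namespaced function like QFramework\Phrase() probably has to be loaded explicitly. A minimal sketch (the helpers path below is hypothetical):

    <?php
    // public/site/init.php (sketch) -- Composer autoloads classes only, so a
    // file that *defines* functions must be required explicitly. The exact
    // path to the QFramework function definitions is a guess here.
    require __DIR__ . '/../vendor/autoload.php';
    require __DIR__ . '/../vendor/qframework/src/functions.php';
    // Alternatively, list that file under "autoload" > "files" in composer.json
    // and run `composer dump-autoload`.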

  2. 14 hours ago, ryan said:

    @Hector Nguyen This is cool to see generators in action. Though as far as I know, PHP's fgetcsv() never loads the whole file in memory at the same time, regardless of which method is used to call it. I think it just loads one line at a time (?), but this reminds me that an optimization to fgetcsv() is to tell it what the longest possible line might be (as 2nd argument), so that it doesn't have to figure it out. Fedeb's example has 0 as the 2nd argument to fgetcsv(), which means "let PHP figure it out", so some overhead could be reduced here by giving it a number like 1024 or whatever the largest line length (in bytes) might be. There may be other benefits to using generators here though? I haven't experimented with them much yet so am curious. 

    @ryan I'm afraid not: fgets() in PHP returns one string (a line), but fgetcsv() returns an indexed array. Arrays in PHP are not very friendly with big datasets, because they are still held entirely in memory!

    The main difference is memory usage. In the first case, all the data is in memory when you return the array; no matter what, if you return values, they have to be stored in memory.

    In the second case you get a lazy approach: you can iterate over the values without keeping all of them in memory. That is a great benefit when memory is constrained relative to the total size of your data.

    I've managed to import a 4 GB CSV file on a shared VPS with 1 GB of memory in seconds; with the "yield" approach, the bottleneck is now MySQL.
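    If you want to verify this yourself, here is a minimal sketch (the CSV path is hypothetical). Run it once with the array variant uncommented and once with the generator, and compare the peak figures:

    <?php
    // Variant 1 (array): file() + str_getcsv() materialise every row at once,
    // so peak memory grows with the file size.
    // $rows = array_map('str_getcsv', file('../data/big.csv'));

    // Variant 2 (generator): only one row is held in memory at a time.
    function rows($file) {
        $h = fopen($file, 'rb');
        while (($row = fgetcsv($h)) !== false) {
            yield $row;
        }
        fclose($h);
    }

    $count = 0;
    foreach (rows('../data/big.csv') as $row) {
        $count++;
    }
    printf("%d rows, peak memory: %.1f MB\n", $count, memory_get_peak_usage(true) / 1048576);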

    • Like 5
  3. I think you could try generators here while you are playing with CSV. For example:

    <?php
    function getRows($file) {
        $handle = fopen($file, 'rb');
        if ($handle === false) {
            throw new Exception('Cannot open file ' . $file);
        }
        while (feof($handle) === false) {
            $row = fgetcsv($handle);
            if ($row !== false) { // fgetcsv() returns false for blank/unreadable lines
                yield $row;
            }
        }
        fclose($handle);
    }
     
    // Only a single line of the CSV is held in memory at a time;
    // the entire file is never read into memory at once.
    $generator = getRows('../data/20_mil_data.csv');
     
    // foreach ($generator as $row) { print_r($row); } // equivalent shorthand
    while ($generator->valid()) {
        print_r($generator->current()); // $generator->current() is your $row
        // play with ProcessWire here
        $generator->next();
    }
    // Note: a generator cannot be rewound once iteration has started,
    // so call getRows() again if you need a second pass.
     
    // http://php.net/manual/en/class.generator.php

    That's always my #1 choice while working with big datasets in PHP.
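    And when MySQL becomes the bottleneck, batching the yielded rows into multi-row INSERTs helps a lot. A minimal sketch, assuming a PDO connection and a `books` table (both are hypothetical placeholders, not part of the code above):

    <?php
    // Sketch: combine the getRows() generator above with batched INSERTs so
    // MySQL round-trips are reduced. DSN, credentials and table are made up.
    $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'secret');

    function insertBatch(PDO $pdo, array $rows) {
        // one placeholder group per row: (?,?,?),(?,?,?),...
        $group = '(' . implode(',', array_fill(0, count($rows[0]), '?')) . ')';
        $sql   = 'INSERT INTO books VALUES ' . implode(',', array_fill(0, count($rows), $group));
        $pdo->prepare($sql)->execute(array_merge(...$rows));
    }

    $batch = [];
    foreach (getRows('../data/20_mil_data.csv') as $row) {
        $batch[] = $row;
        if (count($batch) === 500) {   // flush every 500 rows
            insertBatch($pdo, $batch);
            $batch = [];
        }
    }
    if ($batch) {
        insertBatch($pdo, $batch);     // flush the remainder
    }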

    • Like 8
  4. 1 hour ago, LostKobrakai said:
    • Which folders/files will be created by the application after installation?
      After installation, usually the only place where folders/files are created by ProcessWire is inside /site/assets. But 3rd party modules may also use different places.
    • Which folders/files are usually overwritten?
      Same answer as above.
    • Which folders/files are used for cache, sessions, and temporary data?
      /site/assets/files, /site/assets/cache, /site/assets/sessions
    • Does ProcessWire support logs? If so, where are they?
      /site/assets/logs, $log
    • Which files are used as configuration files? Should I be concerned about template configuration files?
      That's the place where deployment tools won't help, as ProcessWire doesn't store configuration in files (for the most part anyway). There are a few tools out there to accommodate that fact, but iirc my Migrations module is the only one you could run via CLI from a deployment script.

     

    Regarding #5 (config files): ProcessWire does store the database configuration in site/config.php, doesn't it?
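    For reference, the database settings a standard ProcessWire install keeps in /site/config.php look roughly like this (all values below are placeholders):

    <?php
    // /site/config.php (excerpt) -- values are placeholders
    $config->dbHost = 'localhost';
    $config->dbName = 'mydb';
    $config->dbUser = 'myuser';
    $config->dbPass = 'secret';
    $config->dbPort = '3306';

    That is why this file usually has to be excluded from automated deployments and kept per-environment.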

  5. 11 minutes ago, szabesz said:

    @Hector Nguyen I've never heard of it but Magallanes does look "interesting" indeed. So I'm interested in how you will set up your projects. I would be happy to improve my workflow if Magallanes can be the tool to do so.
    As for your actual question, I could answer it too but let's leave it to more experienced ProcessWire developers. I might miss something...

    Hi @szabesz, thanks for your interest.

    I've created some rough documentation here: https://magephp.github.io

    I'm using GitLab and GitLab CI for all my projects. The idea is that whenever I push or accept a merge request into the production branch (master) or the staging branch on GitLab, GitLab CI builds the project and executes the Magallanes deploy command. The changes are pulled to my server as a new release, and that release directory is symlinked to my `public_html` directory.
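    For reference, the GitLab CI side is only a few lines. Here is a minimal sketch of the kind of .gitlab-ci.yml I mean; the job name is made up, and the exact mage syntax depends on your Magallanes version:

    # .gitlab-ci.yml (sketch)
    deploy_staging:
      stage: deploy
      only:
        - staging                 # run this job only for the staging branch
      script:
        - composer install --no-dev
        - vendor/bin/mage deploy to:staging   # Magallanes 1.x/2.x syntax; newer versions use `mage deploy staging`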

     

    I'm using 2 environments: production and staging.

    My Magallanes config for staging looks like this (located at <project_folder>/.mage/config/environments/staging.yml):

    # staging environment
    deployment:
      strategy: git-remote-cache # clones a bare repository on your server, then uses git archive to export your code; faster than the usual ways
      user: www # your webserver user, or whatever user you want, as long as it has read/write access to the webserver group
      port: 998 # ssh port
      from: ./ # local source code directory; e.g. if you put source code into a src/ directory, you need to change this
      to: /srv/users/www/apps/magento # the remote directory which contains your public directory
      excludes_file: .rsync_excludes # all files/directories declared in this file will be deleted if you are using the rsync strategy
    extras:
      enabled: true # don't turn it off :)
      directory: shared # your bloody shared directory on the server
      vcs:
        enabled: true
        kind: git
        repository: git@gitlab.com:xxx/magento.git
        branch: staging # change this if you are using a different branch, obviously
        remote: origin # see the comment above
        directory: repo
      rsync:
        enabled: false # set to true if you are using the rsync-remote-cache strategy
        from: ./
        local: .rsync_cache
        remote: cached-copy
      magento:
        enabled: false # set to true if you are deploying a Magento application
        app_path: bin/magento # the magento executable file
      shared: # the most important section
        enabled: true # always leave this enabled
        linking_strategy: absolute # I do not suggest changing this to relative
        linked_files: # all files you want to keep across deploys
          - app/etc/env.php
          - var/.magento_cronjob_status
          - var/.setup_cronjob_status
          - var/.update_cronjob_status
          - sitemap.xml
        linked_folders: # directories containing dynamic assets you want to keep (logs, product images, sessions)
          - pub/media
          - var/log
          - var/session
    releases:
      enabled: true # I recommend you turn on release mode :)
      max: 10 # how many releases to keep in the releases directory
      symlink: public # the symlink name for the current release; it may be public, public_html... depending on your webserver configuration
      directory: releases # the directory where releases are kept; it will be created inside the 'to' path of the 'deployment' section
    hosts:
      - bloody-production-machine # your IP or SSH alias; I do recommend you use SSH aliases, for God's sake
    tasks:
      pre-deploy: # tasks executed before deploying
      on-deploy: # tasks executed while deploying
      post-release: # tasks executed right after a release is created
        - composer/update
        - magento/staging-setup
        - magento/set-permissions
        - filesystem/link-shared-files
      post-deploy: # tasks executed after all the above sections complete

     

    So with the above configuration, Magallanes works in the following sequence (every remote task in Magallanes opens a new SSH connection):

    1. Checks that SSH works, and creates the extras >> directory on your server if it doesn't exist.
    2. Clones a bare repository into the 'shared' directory from extras >> vcs >> repository.
    3. If releases >> enabled is true, creates a new directory on your server named after releases >> directory.
    4. Uses git archive to export the bare repository into a new directory under releases >> directory, named by unix time.
    5. Reads the tasks section and executes all tasks inside post-release.
    6. If there were no errors, creates a new symlink inside deployment >> to, named after releases >> symlink.
    7. Done.
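    Put together, the layout on the server ends up roughly like this (paths taken from the config above; the release timestamp is made up):

    /srv/users/www/apps/magento
    ├── shared/                  # extras directory: bare repo and shared files/folders
    │   └── repo/                # the bare clone used by git archive
    ├── releases/
    │   ├── 1468123456/          # one directory per release, named by unix time
    │   └── ...
    └── public -> releases/1468123456/   # symlink to the current release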

    If you need further information, please let me know.

    Sorry for my bloody English.

    • Like 1
  6. Hello everyone,

    I haven't played much with ProcessWire yet; my project has just started and I have no experience with how ProcessWire behaves once it goes live. I'm using a tool (Magallanes) to deploy my code to the production server automatically after the build passes.

    Could you please answer the questions below?

    • Which folders/files will be created by the application after installation?
    • Which folders/files are usually overwritten?
    • Which folders/files are used for cache, sessions, and temporary data?
    • Does ProcessWire support logs? If so, where are they?
    • Which files are used as configuration files? Should I be concerned about template configuration files?

    Any help would be appreciated.

    Thank you!

    • Like 1
  7. Thanks to @mr-fan and the others :)

    It's much clearer to me now. So I may have Categories/Category and Books/Book, but Chapter will stand alone?

    I'm also taking a note and sticking it on my screen: "Pages are everything in PW".

     

    Thank you (all of you and yes you) once again,

    I'm trying it right now and will come back soon :)

    • Like 1
  8. Hi @szabesz,

    As I understand it, you're recommending one-to-many relationships to solve my case, right? But there are two things confusing me right now:

    • How does Chapters/Chapter make sense in my case?
    • There are thousands of chapters per book, so with a Chapters/Chapter relationship, one "Chapters" page would contain millions of "Chapter" pages. How can I live with that?

    It's quite confusing to me :(

  9. 3 hours ago, LostKobrakai said:

    If you've control over the server there's also mosh, which is a more stable alternative to ssh with mobile connections in mind.

    Actually, mosh (mobile shell) is what I'm using at the moment, not my own script. But mosh is still under heavy development, it refuses to work with mouse scrolling, and some hosting providers don't allow you to whitelist a port range (scaleway.com, for example).

    That is why I had to write this, for the few cases like SSH tunneling or NAT...

  10. Hello there,

    If you live in the terminal like me, then I assume you will love this script :) If you often SSH into servers, then I believe you've had headaches trying to keep your SSH sessions alive.

    Most of these issues can be resolved by adding this to your ~/.ssh/config:

    Host *
      ServerAliveInterval 60

    But if you are working in a company, or you connect to the Internet through a VPN/proxy, then your SSH sessions can be really unstable. Like this:

    Write failed: broken pipe
    packet_write_wait connection to: XXXXX

     

    To use this script, download the attached file below and move it to /usr/local/bin, /usr/bin, or ~/bin, or simply put it anywhere you want.

    Assuming you put it into /usr/local/bin, you can then use the autossh command globally.

     

    USAGE

    It's simple: you can run autossh alone or with parameters:

    autossh your_user@your_server_ip

     

    Below is the whole script

     

    #!/bin/bash
    # ------------------------------------------------------------------------------
    # FILE: autossh
    # DESCRIPTION: This is an SSH-D proxy with auto-reconnect on disconnect
    # AUTHOR: Hector Nguyen (hectornguyen at octopius dot com)
    # VERSION: 1.0.0
    # ------------------------------------------------------------------------------
    VERSION="1.0.0"
    GITHUB="https://github.com/hectornguyen/autossh"
    AUTHOR="Hector Nguyen"
    SCRIPT=${0##*/}
    IFS=$'\n'
    ALIVE=0
    HISTFILE="$HOME/.autossh.history"
    
    # Use colors, but only if connected to a terminal, and that terminal supports them.
    if which tput >/dev/null 2>&1; then
      ncolors=$(tput colors)
    fi
    if [ -t 1 ] && [ -n "$ncolors" ] && [ "$ncolors" -ge 8 ]; then
      RED="$(tput setaf 1)"
      GREEN="$(tput setaf 2)"
      YELLOW="$(tput setaf 3)"
      BLUE="$(tput setaf 4)"
      BOLD="$(tput bold)"
      NORMAL="$(tput sgr0)"
    else
      RED=""
      GREEN=""
      YELLOW=""
      BLUE=""
      BOLD=""
      NORMAL=""
    fi
    
    # Progress or something
    start_progress()
    {
      while true
      do
        echo -ne "#"
        sleep 1
      done
    }
    
    quick_progress()
    {
      while true
      do
        echo -ne "#"
        sleep .033
      done
    }
    
    long_progress()
    {
      while true
      do
        echo -ne "#"
        sleep 3
      done
    }
    
    dot_progress()
    {
      for i in {1..100}; do
        printf "." $i -1 $i
        sleep .033
      done
      echo_c green " 100%{$NORMAL}"
      sleep 1
    }
    
    stop_progress()
    {
      kill $1
      wait $1 2>/dev/null
      echo -en "\n"
    }
    
    # Case-insensitive for regex matching
    shopt -s nocasematch
    
    # Prepare history mode
set -o history   # 'set -i' is not valid in a script; enable history explicitly
    history -c
    history -r
    
    # Input method
    get_input()
    {
      read -e -p "${BLUE}$1${NORMAL}" "$2"
      history -s "${!2}"
    }
    
    # Echo in bold
    echo_b()
    {
      if [ "$1" = "-e" ]; then
        echo -e "${BOLD}$2${NORMAL}"
      else
        echo "${BOLD}$1${NORMAL}"
      fi
    }
    
    # Echo in colour
    echo_c()
    {
      case "$1" in
        red | r | -red | -r | --red | --r ) echo "${RED}$2${NORMAL}" ;;
        green | g | -green | -g | --green | --g ) echo "${GREEN}$2${NORMAL}" ;;
        blue | b | -blue | -b | --blue | --b ) echo "${BLUE}$2${NORMAL}" ;;
        yellow | y | -yellow | -y | --yellow | --y ) echo "${YELLOW}$2${NORMAL}" ;;
        * ) echo "$(BOLD)$2$(RESET)" ;;
      esac
    }
    
    # Get data from parameters
    if [[ ! -n "$remote_param" && -n "$1" ]]; then
        remote_param="$1"
        remote_user="${remote_param%%@*}"
        remote_ip="${remote_param##*@}"
    fi
    
    # Get input data and save to history
    save_input()
    {
      if [[ ! -n "$remote_user" && ! -n "$1" ]]; then
        while get_input "SSH Username > " remote_user; do
          case ${remote_user%% *} in
            * )
                if [ -n "$remote_user" ]; then
                  break
                else
                  continue
                fi
            ;;
          esac
        done
      fi
      if [[ ! -n "$remote_ip" && ! -n "$1" ]]; then
        while get_input "SSH Alias/IP-address > " remote_ip; do
          case ${remote_ip%% *} in
            * )
                if [ -n "$remote_ip" ]; then
                  break
                else
                  continue
                fi
            ;;
          esac
        done
      fi
    }
    
# Infinite loop to keep the connection alive
    auto_connect()
    {
      while true; do
    exist=$(ps aux | grep "[s]sh $remote_user@$remote_ip")   # [s] keeps grep from matching itself
        if test -n "$exist"
        then
          if test $ALIVE -eq 0
          then
            echo_c yellow "I'm alive since $(date)"
          fi
          ALIVE=1
        else
          ALIVE=0
          echo_c red "I'm dead... God is bringing me back..."
          clear
          printf "${GREEN}Connecting: "
      for i in {1..100}; do
        printf "."
        sleep .033
      done
      echo_c green " 100%"
          sleep 1
          clear
          ssh $remote_user@$remote_ip
        fi
        sleep 1
      done
    }
    
    main()
    {
      save_input
      auto_connect
    }
    
    main

     

    Hope this helps.

    autossh.sh

    • Like 4
  11. On August 17, 2016 at 0:02 AM, OrganizedFellow said:

    WHOA!

     

    I'm trying to figure out what all this does :)

     

    #BashNewb

    Let me explain it quickly,

    Your original bash script needs to be modified whenever you want to back up a different database, or the same database on a different server, or to change the damn path. I really do hate that; it takes me more than 30 seconds. That is why I wrote this one to automate it.

    The purpose of this script is to let you enter all the needed information in the shell (I also added history support, so next time, if you're reusing the same credentials, just press up or down to navigate through them) instead of opening the script in a vim editor and changing the values one by one.

    Of course it is slower if you have only one server and only one database to back up. But if you have more than one server or database, I'd suggest you use my script :) The good news is that I kept your workflow, so you have no worries at all.

    Cheers!

    • Like 4
  12. I added an interactive mode to your script, hope it helps someone as lazy as me :)

    #!/bin/bash
    #----------------------------------------------
    # INTERACTIVE REMOTE DATABASE DUMP SCRIPT
    #----------------------------------------------
    #  This work is licensed under a Creative Commons 
    #  Attribution-ShareAlike 3.0 Unported License;
    #  see http://creativecommons.org/licenses/by-sa/3.0/ 
    #  for more information.
    #----------------------------------------------
    SCRIPT=${0##*/}
    IFS=$'\n'
    HISTFILE="$HOME/.remotedump.history"
    
    # Use colors, but only if connected to a terminal, and that terminal supports them.
    if which tput >/dev/null 2>&1; then
      ncolors=$(tput colors)
    fi
    if [ -t 1 ] && [ -n "$ncolors" ] && [ "$ncolors" -ge 8 ]; then
      RED="$(tput setaf 1)"
      GREEN="$(tput setaf 2)"
      YELLOW="$(tput setaf 3)"
      BLUE="$(tput setaf 4)"
      BOLD="$(tput bold)"
      NORMAL="$(tput sgr0)"
    else
      RED=""
      GREEN=""
      YELLOW=""
      BLUE=""
      BOLD=""
      NORMAL=""
    fi
    
    # Case-insensitive for regex matching
    shopt -s nocasematch
    
    # Prepare history mode
set -o history   # 'set -i' is not valid in a script; enable history explicitly
    history -c
    history -r
    
    # Input method text
    get_input()
    {
      read -e -p "${BLUE}$1${NORMAL}" "$2"
      history -s "${!2}"
    }
    
    # Input method password
get_input_pw()
{
  read -s -p "${BLUE}$1${NORMAL}" "$2"
  echo   # read -s suppresses the trailing newline, so print one
  # deliberately not saved to the history file, since this is a password
}
    
    # Echo in bold
    echo_b()
    {
      if [ "$1" = "-e" ]; then
        echo -e "${BOLD}$2${NORMAL}"
      else
        echo "${BOLD}$1${NORMAL}"
      fi
    }
    
    # Echo in colour
    echo_c()
    {
      case "$1" in
        red | r | -red | -r | --red | --r ) echo "${RED}$2${NORMAL}" ;;
        green | g | -green | -g | --green | --g ) echo "${GREEN}$2${NORMAL}" ;;
        blue | b | -blue | -b | --blue | --b ) echo "${BLUE}$2${NORMAL}" ;;
        yellow | y | -yellow | -y | --yellow | --y ) echo "${YELLOW}$2${NORMAL}" ;;
        * ) echo "$(BOLD)$2$(RESET)" ;;
      esac
    }
    
    # Get input data and save to history
    save_input()
    {
      if [[ ! -n "$local_dir" ]]; then
        while get_input "Local DB Directory > " local_dir; do
          case ${local_dir%% *} in
            * )
                if [ -n "$local_dir" ]; then
                  break
                else
                  continue
                fi
            ;;
          esac
        done
      fi
      if [[ ! -n "$remote_user" ]]; then
        while get_input "SSH Username > " remote_user; do
          case ${remote_user%% *} in
            * )
                if [ -n "$remote_user" ]; then
                  break
                else
                  continue
                fi
            ;;
          esac
        done
      fi
      if [[ ! -n "$remote_ip" ]]; then
        while get_input "SSH Aliases/IP-address > " remote_ip; do
          case ${remote_ip%% *} in
            * )
                if [ -n "$remote_ip" ]; then
                  break
                else
                  continue
                fi
            ;;
          esac
        done
      fi
      if [[ ! -n "$remote_dir" ]]; then
        while get_input "Remote Backup Directory > " local_dir; do
          case ${remote_dir%% *} in
            * )
                if [ -n "$remote_dir" ]; then
                  break
                else
                  continue
                fi
            ;;
          esac
        done
      fi
      if [[ ! -n "$db_user" ]]; then
        while get_input "DB Username > " local_dir; do
          case ${db_user%% *} in
            * )
                if [ -n "$db_user" ]; then
                  break
                else
                  continue
                fi
            ;;
          esac
        done
      fi
      if [[ ! -n "$db_password" ]]; then
        while get_input_pw "DB Password > " local_dir; do
          case ${db_password%% *} in
            * )
                if [ -n "$db_password" ]; then
                  break
                else
                  continue
                fi
            ;;
          esac
        done
      fi
      if [[ ! -n "$db_name" ]]; then
        while get_input "DB Name > " local_dir; do
          case ${db_name%% *} in
            * )
                if [ -n "$db_name" ]; then
                  break
                else
                  continue
                fi
            ;;
          esac
        done
      fi
    }
    
    change_pwd_rsync()
    {
      ## CD INTO LOCAL WORKING DIRECTORY
      ## this is where I keep my local dump SQL files.
      ## the most recent one is always named dump.sql
      cd "$local_dir"
      
      ## RSYNC LATEST DUMP.SQL FILE TO REMOTE SERVER
      rsync -avzP dump.sql $remote_user@$remote_ip:$remote_dir
      wait
    }
    
    remote_dump()
    {
      ## SSH INTO SERVER
      ssh $remote_user@$remote_ip /bin/bash << EOF
        echo "**************************";
        echo "** Connected to remote. **"
        echo "**************************";
        echo "";
    
        ## CD INTO REMOTE WORKING NON-PUBLIC DIRECTORY
        ## where the dump.sql file was rsynced to
        cd "$remote_dir"
        wait
        sleep 1
        
        ## RUN MYSQLDUMP COMMAND
        ## save the SQL with date stamp
    mysqldump --host=localhost --user=$db_user --password=$db_password $db_name > `date +%Y-%m-%d`.sql;
    echo "***************************************";
    echo "** `date +%Y-%m-%d`.sql has been created. **"
    echo "***************************************";
        echo "";
        wait
        sleep 1
    
        ## IMPORT DUMP.SQL COMMAND
        mysql --host=localhost --user=$db_user --password=$db_password $db_name < dump.sql;
        echo "*********************************";
        echo "** DUMP.SQL has been imported. **"
        echo "*********************************";
        echo "";
        wait
        sleep 1
    
        ## REMOVE DUMP.SQL FILE
        rm dump.sql
        echo "********************************";
        echo "** DUMP.SQL has been removed. **"
        echo "********************************";
        exit
EOF
    }
    
    main()
    {
      save_input
      change_pwd_rsync
      remote_dump
    }
    
    main

     

    • Like 7
  13. Hello @szabesz,

    Thanks for replying to my topic. As far as I understand now, all fields belong to a template, so I can add as many fields to a template as I want, right? In my case, I may have two templates: one for a book and the other for a chapter. I may have a template to manage the categories too, because books should be managed under categories.

    So it will look like this in my imagination:

    CATEGORY TEMPLATE

    Category
    ├── Title
    ├── Static Block
    └── Thumbnail

    BOOK TEMPLATE

    Book
    ├── Title
    ├── Author
    ├── Publisher
    ├── ISBN
    ├── Type (traditional or comic)
    ├── Rating
    └── Thumbnail

    CHAPTER TEMPLATE

    Chapter
    ├── No (chapter number)
    ├── Title
    ├── Content
    ├── Rating (this affects the rating of the whole book)
    └── Thumbnail

     

    Everything in PW is managed by pages. So, for example,

    I create a new page named "Hacking & Security" (a category), then a child page of "Hacking & Security" named "The Art of Deception", and then a child of that named "Chapter 1: Introduction"?

    It also means I have to create all chapters of that book as child pages of "The Art of Deception"?

     

    Question 1: Is it possible to create a child of a child, as I demonstrated above (Category > Book > Chapter 1 ... N)?

    Question 2: Is it possible to force users to create book content (at least one chapter) whenever they create a new book? I'm also looking for a way to create a new admin page for managing books, separate from the default "Pages" section; can I?

    Question 3: Is it possible to set a book page's status to something like Draft, Published, Under Review, etc.? Is that a field?

    Question 4: I looked at a PW template and saw something like $page->title. How is that possible? Can PW use the names of fields in a template without defining them?
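    On question 4, for context: ProcessWire exposes a page's fields as properties of the $page object through magic getters, so template files can read them directly without declaring anything. A minimal sketch of a chapter template file, reusing the field names from my outline above (whether those exact names are valid field names is an assumption):

    <?php
    // /site/templates/chapter.php (sketch) -- field access goes through Page::__get()
    echo "<h1>Chapter {$page->no}: {$page->title}</h1>";
    echo $page->content;          // the 'content' field defined on the chapter template
    echo $page->parent->title;    // the book this chapter belongs to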

     

    Thank you once again,

    Have a good day.

  14. Hello PW-ers,

    I'm a newbie to this CMF, and I've totally fallen in love with this sexy framework. I know ProcessWire is documented very well, but nothing gets you up to speed with a new framework faster than building a real project.

    The application I want to build (for reading books online) will be much like a normal CMS.

    It should have fields/entities like this:

    • Books
    • Books > Chapters (a book may have over a thousand chapters)
    • Authors
    • Publishers
    • Posters (a book may be uploaded by any registered member)
    • Images (thumbnails, picked and uploaded via an input type=file)
    • Rating (a book can be rated by readers, like any forum rating system; in the social networking world this is known as Likes)
    • Date created / updated

    Please suggest/guide me on how to create and map fields (or whatever this is called) in ProcessWire for the description above.

    I still don't understand the connection between fields (in the backend) and variables (how to use them) in a template; please give me a few examples if possible.

     

    Any help would be appreciated,

    Hector.
