Everything posted by gornycreative

  1. Yes, I copied that from elsewhere in this thread, but realized after I posted that it wasn't an actual solution. I was under the impression the field always stores values in the db as 32-bit hex regardless of the output method?
  2. I'm using this as a module configuration inputfield in this case, not as a site facing Fieldtype. An admin theming module. So the LESS is being applied on the admin side.
  3. Hi, I'm using this in a module config and want to call the value up and pass it through to LESS within the module, but the default format doesn't comply with LESS formatting. Since I don't need alpha for my application, is the best method to do as suggested above:

```php
$color_out = '#' . ltrim( $color_in, '#ff' );
```

Or is there a more elegant way to handle this? EDIT: Okay, obviously not that exactly, but rather pulling the rightmost 6 chars and prefixing a # (see the sketch below).
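Something like this, as a minimal sketch of that edit (assuming the field stores an 8-digit, alpha-first hex string; the variable names are just for the example):

```php
// Assuming the field stores an 8-digit alpha-first hex value, e.g. "#ffcc3366".
// Keep only the rightmost 6 characters (the RGB part) and re-prefix the "#".
// Note that ltrim() here only strips the "#" character, unlike ltrim($x, '#ff'),
// which would also eat any leading "f"s belonging to the color itself.
$color_in  = '#ffcc3366';
$color_out = '#' . substr(ltrim($color_in, '#'), -6); // "#cc3366"
```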
  4. A humorous Stack Overflow question that has collected many answers over the years on stacking ternary operators is here: https://stackoverflow.com/questions/5235632/stacking-multiple-ternary-operators-in-php I agree this may be a good candidate for switch, but it's funny to see how many ways this cat got skinned over the years.
  5. I have been digging into the oembed fields/essence and this as well. It's my first exposure to this feature, and it's so cool to see where it is used. If you run Plex, for example, and you use youtube-dl to grab audio from a music video, the comment on the exported audio file includes the YouTube URL, and when you load the music file into Plex, Plex pulls the oEmbed data and shows you the video description for the song (quick sketch of the lookup below)! I don't know if this is still being worked on, but I like this implementation for YouTube, so I will fork and add lite-vimeo-embed support if necessary.
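As an aside, the oEmbed lookup itself is easy to try from PHP; YouTube exposes a public JSON endpoint (the video URL below is just an example):

```php
// Fetch oEmbed metadata for a YouTube video (example URL).
$video = 'https://www.youtube.com/watch?v=dQw4w9WgXcQ';
$url   = 'https://www.youtube.com/oembed?url=' . urlencode($video) . '&format=json';
$data  = json_decode(file_get_contents($url), true);

// The payload includes title, author_name, thumbnail_url, embed html, etc.
echo $data['title'] ?? '';
```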
  6. Just curious if this module is still active/stable? It doesn't appear in the modules directory - I only found it through searching the forum. Thanks, J
  7. I will hopefully be using this going forward instead of the typical import/export process for my baseline profile, which takes into account workflows that are more familiar to desktop publishers and allows for traditional task delegation - photo, content, layout, section editors, etc. If there is interest in this, I will provide at least the basic core structure and page classes - it might be a good advanced example of how things can be done for publishing projects. I don't have an exact ETA; I'm still working on AdminStyleChroma, which is my admin theme color manager, and I'm also working on a Color Thief image field implementation - both of which will have official threads and releases when they are ready.
  8. I think his fork is still being worked through - the last issue he was addressing was nested repeater matrices and including the RTE fields in repeaters, which I believe was resolved. I haven't run into any issues with his fork in my testing.
  9. Cool, yes, adding the empty parameter was the fix.
  10. Small thing: I was getting an error without any custom options defined. As is, the line where you set up the arguments:

```php
public function ___addFootnotes($str, $options = [], $field = "") {
```

results in an error if you pass just a $str and a $field without an options array: the array_merge() on line 83 fails because its second argument is not an array. I didn't pull the repo via git so I can't do a PR handily, but reversing the parameter order resolves it (illustration below):

```php
public function ___addFootnotes($str, $field = "", $options = []) {
```
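To illustrate the mix-up at a hypothetical call site (only the two signatures above come from the module; the calls here are made up for the example, assuming $footnotes is the Textformatter instance):

```php
// Original signature: the field name lands in $options, and the module's
// array_merge($defaults, $options) fails on a string:
$html = $footnotes->addFootnotes($str, 'body'); // error on the line 83 array_merge()

// Reversed signature: the same two-argument call works, and a custom
// options array can still be passed as the third argument when needed:
$html = $footnotes->addFootnotes($str, 'body', []);
```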
  11. Yes, I just noticed it also - the columns are correct and Tracy picks up the entries, but when I click a log page name I get the error message. Can confirm all files exist. PW 3.0.214, PHP 8.1.
  12. Can I throw another log onto this fire? Today I installed a local instance of oobabooga's text-generation-webui with Vicuna. It looks like text-generation-webui has an API mode that can be set to listen for JSON prompt requests (rough sketch below)! It would be great to either fork PromptGPT or develop an option to select alternative API endpoints, etc. I used this tutorial to guide the install, with the exception that I also needed to install the NVIDIA CUDA toolkit to get access to CUDA cores from pytorch.
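For anyone who wants to poke at it, a minimal sketch against what the legacy --api mode exposed at the time; the endpoint, port, and payload keys are assumptions here and may have changed since, so adjust them to whatever your install actually exposes:

```php
// Minimal JSON prompt request to a local text-generation-webui instance.
$payload = json_encode([
    'prompt'         => 'Write a haiku about ProcessWire.',
    'max_new_tokens' => 80,
]);

$ch = curl_init('http://127.0.0.1:5000/api/v1/generate');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_POSTFIELDS     => $payload,
]);
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

// The legacy API returned generated text under results[0].text.
echo $response['results'][0]['text'] ?? '';
```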
  13. This is what I remember you saying in the 'adding RM to existing products' video, but I know that since this is a work in progress, things could have changed. I'm in the middle of applying gebeer's Repeater Matrix migration methods to something I am working on, but when I am done I will take a look at the dev branch and try some things.
  14. This did work. Does it make sense to add a boolean flag here to allow the previous roles to be retained unless it is set? I'm looking at setTemplateAccess and thinking: why not add a similar boolean here? Just food for thought.
  15. Hi @bernhard, I'm trying to include custom roles, permissions and access via the migrate() method array setup. Looking at the source, this *seems* like it should work, but for some reason I get an error:

Method RockMigrations::setRolePermissions does not exist or is not callable in this context

```php
$rm->migrate([
    "roles" => [
        'copy_editor' => [
            'permissions' => [
                'page-view', 'page-edit', 'page-delete', 'page-edit-front',
                'page-edit-recent', 'page-move', 'page-sort',
                'comments-manager', 'profile-edit',
            ],
            'access' => [
                'home' => ['view'],
            ],
        ],
        'layout_editor' => [
            'permissions' => [
                'page-view', 'page-edit', 'page-delete', 'page-edit-recent',
                'page-move', 'page-sort', 'page-template', 'logs-view',
                'profile-edit',
            ],
            'access' => [
                'home' => ['view', 'edit'],
            ],
        ],
    ],
]);
// I include fields and templates and other things later - I will likely
// move roles AFTER templates once this is working.
```

Is there something I am missing here?
  16. I was trying to automate using $rm->installModule("LanguageSupport"); but I found that when I ran the migrate script, some processes installed while others did not, and I got errors tied to not being able to rename page #0, etc. I tried playing around with manually preempting the process installs, but then I'd get errors about templates or fields already existing. I can manually install LanguageSupport and of course go through all the follow-ups that I need. I'm just wondering if there is an order/trick to getting it to work.
  17. Hey @kongondo I just realized you were redoing this - wow! If I could suggest two places to look for inspiration from the UI side. WP is nigh unusable out of the box for a number of clients in terms of both media and filtering, and the two favorite plugins I deploy to handle these have always been Admin Columns Pro on the filtering side and WP Media Folder on the media management side.

https://www.admincolumns.com/what-is-admin-columns/

Admin Columns Pro is just pure magic for clients. Being able to edit in place on the post/page list, add images from the grid, run batch commands from the grid - this plugin alone does so much for WordPress that it's a no-brainer first install. And the filtering columns at the top of the list that conform to custom posts and pages (like if you are using ACF or Pods) are really intuitive, with awesome AJAX. Like I said, clients love how much it streamlines their work.

I personally have gotten used to listers, but the experience of listers in ProcessWire is a lot more like NSP Code's advanced ordering WP plugin, where you are able to filter and sort hierarchical lists and then create set list pages for them: https://www.nsp-code.com/premium-plugins/ This sort of thing is fine for developers putting together special lists - and I actually use Lister Pro to provide clients with worklists that detect content that is incomplete (missing author, card image, unpublished, too short, etc.) - but as others have mentioned, the lister interface is for some reason hard to grasp. I always had the same problem trying to train people on using Advanced Post Types Order.

https://www.joomunited.com/wordpress-products/wp-media-folder

I tried a bunch of media managers for WordPress and this is the one that folks seemed to like best. I like that it produces its own meta layer without disturbing the file system - so if for some reason they uninstall it, everything reverts to the big perpetual stew that is WP media.

Not saying that you need to duplicate anything here, but I saw others posting about things that clients have liked from a UX perspective, and so here ya go!
  18. YES, this struck me too - how much goes into directing and feeding the training. If you watch Corridor Digital on YouTube, they've gone through this exploration of image training to insert their own images into Stable Diffusion, and although I think the process is more streamlined now, you get a sense of just how much work goes into preparing your training data - not for the faint of heart.
  19. gpt4all looks amazing, I was watching some video on it last night. There are a number of knowledge-base-fed, GPT-driven chatbots appearing on the market. These tools, combined with NLP tools from AWS and call-routing nodes with Twilio, could eventually create phone-accessible intelligent chatbots that could serve as digital virtual assistants for your office, or as natural-language support call bots. I can't really get over how exciting this potential is. Soon these bots will be earning proficiency licenses and will be able to answer diagnostic and therapeutic calls - which is going to be its own problem. If you have not looked at the suite of Amazon's NLP stack, it is really impressive:

Lex (process audio/text requests) -> Comprehend (tokenize/tag intents and tone, pass through) -> Transcribe -> Translate (English output) -> Prompt GPT (live question request with intent/tone notes) -> Polly (process the GPT response as an audio reply)

All of this could be driven by Twilio branching nodes to focus only on GPT instances that are trained in a given department - ultimately this will be a neat way to save on processing resources if you have specialty nodes that can refer to each other when a question goes beyond their expertise or more general help is needed. You could even branch these dialogs to a logged live transcript that could alert a live person if a frustrated or angry sentiment flag gets passed. I'm sure it is only a matter of time before these services offer GPT integration themselves. I hope open versions will always remain, even if the 'Community Editions' lag behind the commercial versions.
  20. This is so cool, and I have actually used a much older series of sites for this since - oh gosh, the early 2000s? https://www.gnod.com/ Gnod music, books and movies were just an amazing way to find new artists. The application still works, and many visitors continue to contribute every day.
  21. If folks are looking to add the full gamut of emojis, be sure to check out https://emojipedia.org/ You have to scroll down a bit on more complex multitone emojis, but shortcodes are also included.
  22. Once LLM models can be personally trained on home networks, we can develop the ability to train on personal data and education, resources and inspiration. Some LLMs can already be run on a home machine with a (modest!) Nvidia 4090, while other models are more generic, running on any 24GB+ GPU.

Imagine being able to feed all of Auguste Rodin's notebooks, sketchbooks and studies into an AI, and the design cues that could produce. Or da Vinci's. Who owns the rights to this, or is it public domain? The rights aspect of models is going to be the next big war.

I DON'T believe people are going to want universal and unopinionated models for their own work. I don't think those are as useful for maintaining the creative integrity of a person in their field. It is great to be able to consult AI librarians - models trained on a particular body of knowledge. For example, I would love to be able to consult an AI trained on all of the protected content published by the Journal for the Study of the Old Testament. It would radically transform academia to have research assistants that could be leased for a small fee to provide reliable synopses and citations on the vast volumes of journals within a given field.

Or even reinvent Reader's Digest: train a model on the text of all popular novels and develop a bot that could be consulted to read shorter summaries of popular novels, or even adaptations of novels for different age groups or language skill levels. What if we could read a Dostoyevsky novel in an abridged version - with an optional character diagram - at an 8th grade level - for someone with ESL?

What I am interested in, and what many bosses are interested in, is the ability to train an AI model that is like us and retains our style, experience and knowledge base. Lots of bosses would like to be able to hire as many of themselves as they could - particularly when growth and innovation are important. Not everyone wants drones. Someone who could produce draft output that we curate - this is the most promising edge of AI, and the one that frightens a lot of people, because many mid-level folks are where they are today because they have copied or stolen other people's work.

Once you have a digital twin trained on your own work, inspiration, background and style, you should be able to lease your twin to do certain tasks. Because trust of training sources is going to be important, and in the end there will be models for all of us - how are those rights managed? When you use an engine to train and build out a model based on your preferences, likes, background, interests and expertise, do you own the rights to what has been trained? Or do those rights belong to the owners of the content used to train the model?

The answers to these questions frighten me more than the technology itself - just as medical patents infringe on bodily integrity on a regular basis in the US once you get involved with product trials. The idea that a company can own a part of your biology because you get a treatment is disturbing. The patents and rights issues related to medical ethics and personal property rights are still unresolved - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3440234/ This will certainly apply to the digitization of connectome neural paths. A cool primer on connectomes is worth watching if you haven't heard of this.

As an artist, who wouldn't love to be able to train their own personal visual model based solely on their own collection of work - studies, materials, style, etc. - and then put that 'virtual apprentice' to work. I think these global singularity-type models are great for those who can afford not to discriminate for quality and taste of output. They get the job done when the specifics aren't important, or when you are willing to compromise your vision for something superior that serendipity brings along. But for those who have a more precise vision with lower tolerances, the general training models don't get lucky often enough to work consistently.

"The work of a master of their craft who is also a professional is reliable, repeatable, and responsible." - R. Fripp

AI is not this - yet - for anything. But all the ingredients are coming together nicely. An incredible documentary on this is AlphaGo - https://www.imdb.com/title/tt6700846/ - well worth a watch.
  23. Okay, I got a proper answer once I switched to an account with credits. Awesome. Request and response below (plus a quick sketch of the raw call).

```json
{
  "request": {
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "This is a test for ChatGPT. Do you hear me?"
      }
    ]
  },
  "response": {
    "id": "chatcmpl-...",
    "object": "chat.completion",
    "created": 1679504670,
    "model": "gpt-3.5-turbo-0301",
    "usage": {
      "prompt_tokens": 21,
      "completion_tokens": 18,
      "total_tokens": 39
    },
    "choices": [
      {
        "message": {
          "role": "assistant",
          "content": "\n\nYes, I can hear you loud and clear! How can I assist you today?"
        },
        "finish_reason": "stop",
        "index": 0
      }
    ]
  }
}
```
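For context, the request above maps onto a plain HTTP call to OpenAI's chat completions endpoint. A minimal sketch, assuming the API key lives in the OPENAI_API_KEY environment variable:

```php
// Minimal chat completion request matching the JSON above.
$payload = json_encode([
    'model'    => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'This is a test for ChatGPT. Do you hear me?'],
    ],
]);

$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
    CURLOPT_POSTFIELDS     => $payload,
]);
$reply = json_decode(curl_exec($ch), true);
curl_close($ch);

// The assistant's text sits under choices[0].message.content.
echo $reply['choices'][0]['message']['content'] ?? '';
```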
  24. I meant I got this result, not the developer's result.