
gornycreative


Posts posted by gornycreative

  1. 11 hours ago, wbmnfktr said:

    In terms of writing and suggestions... GPT-4 is already quite good at this. Feed it the last 10+ books you read and you will get awesome results. Depending on your input and your prompt... you might be impressed.

    This is so cool, and I have actually used a much older series of sites for this since, oh gosh, the early 2000s?

    https://www.gnod.com/

    Gnod's music, books and movies tools were just an amazing way to find new artists. The application still works, and many visitors continue to contribute every day.


  2. Once LLMs can be trained personally on home networks, we will be able to train on our own data and education, resources and inspiration. Some LLMs can already run on a home machine with a modest Nvidia 4090 😆 but other models are more generic, running on any 24GB+ GPU.

    Imagine being able to feed all of Auguste Rodin's notebooks, sketchbooks and studies into an AI, and the design cues that could produce. Or da Vinci's.

    Who owns the rights to this, or is it public domain? The rights aspect of models is going to be the next big war. I DON'T believe people are going to want universal, unopinionated models for their own work. I don't think those are as useful for maintaining the creative integrity of a person in their field. It is great, though, to be able to consult AI librarians: models trained on a particular body of knowledge.

    For example, I would love to be able to consult an AI trained on all of the protected content published by the Journal for the Study of the Old Testament. It would radically transform academia to have research assistants that could be leased for a small fee to provide reliable synopses and citations across the vast volumes of journals within a given field.

    Or even reinvent Reader's Digest: train a model on the text of all popular novels and develop a bot that could be consulted for shorter summaries of popular novels, or even adaptations of novels for different age groups or language skill levels.

    What if we could read a Dostoyevsky novel in an abridged version - with an optional character diagram - at an 8th grade level, for an ESL reader?

    What I am interested in, and what many bosses are interested in, is the ability to train an AI model that is like us and retains our style, experience and knowledge base. Lots of bosses would like to be able to hire as many copies of themselves as they could, particularly when growth and innovation are important. Not everyone wants drones.

    Someone who could produce draft output that we curate: this is the most promising edge of AI, and the one that frightens a lot of people, because many mid-level folks are where they are today because they have copied or stolen other people's work.

    Once you have a digital twin trained on your own work, inspiration, background and style, you should be able to lease your twin out for certain tasks. Because trust in training sources is going to be important, and in the end there will be models of all of us, how are those rights managed? When you use an engine to train and build out a model based on your preferences, likes, background, interests and expertise, do you own the rights to what has been trained? Or do those rights belong to the owners of the content used to train the model?

    The answers to these questions frighten me more than the technology itself, just as medical patents infringe on bodily integrity on a regular basis in the US once you get involved with product trials. The idea that a company can own a part of your biology because you received a treatment is disturbing.

    The patents and rights issues related to medical ethics and personal property rights are still unresolved - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3440234/

    This will certainly apply to the digitization of connectome neural paths.

    A cool primer on connectomes, if you haven't heard of this:

    As an artist, who wouldn't love to be able to train a personal visual model based solely on their own collection of work - studies, materials, style, etc. - and then put that 'virtual apprentice' to work?

    I think these global singularity-type models are great for those who can afford not to discriminate on quality and taste of output. They get the job done when the specifics aren't important, or when you are willing to compromise your vision for something superior that serendipity brings along. But for those who have a more precise vision with lower tolerances, the general models don't get lucky often enough to work consistently.

    "The work of a master of their craft who is also a professional is reliable, repeatable, and responsible." - R. Fripp

    AI is not this - yet - for anything. But all the ingredients are coming together nicely.

    On 12/17/2022 at 2:20 AM, rushy said:

    ...what happened with the Google Alpha Zero program using neural networks. In board games like Chess and Go...

    An incredible documentary on this - AlphaGo - https://www.imdb.com/title/tt6700846/

    Well worth a watch.

    • Like 2
  3. Okay, I got a proper answer once I switched to an account with credits. Awesome.

    {
        "request": {
            "model": "gpt-3.5-turbo",
            "messages": [
                {
                    "role": "user",
                    "content": "This is a test for ChatGPT. Do you hear me?"
                }
            ]
        },
        "response": {
            "id": "chatcmpl-...",
            "object": "chat.completion",
            "created": 1679504670,
            "model": "gpt-3.5-turbo-0301",
            "usage": {
                "prompt_tokens": 21,
                "completion_tokens": 18,
                "total_tokens": 39
            },
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": "\n\nYes, I can hear you loud and clear! How can I assist you today?"
                    },
                    "finish_reason": "stop",
                    "index": 0
                }
            ]
        }
    }
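    For anyone curious about the shape of that log, here is a minimal Python sketch that rebuilds the request payload and pulls the reply and token accounting out of a response shaped like the one above. This is just an illustration of the payload structure, not the module's actual code; in practice the request is sent to the chat completions endpoint with your own API key.

```python
import json

# Build a request payload shaped like the "request" object above
# (model name and message copied from the example log).
def build_chat_request(prompt, model="gpt-3.5-turbo"):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Extract the assistant's reply and token accounting from a
# response shaped like the "response" object above.
def summarize_response(response):
    usage = response["usage"]
    return {
        "reply": response["choices"][0]["message"]["content"].strip(),
        "total_tokens": usage["prompt_tokens"] + usage["completion_tokens"],
    }

req = build_chat_request("This is a test for ChatGPT. Do you hear me?")
print(json.dumps(req, indent=2))
```

    Note that total_tokens in the log (39) is just prompt_tokens + completion_tokens (21 + 18), which is what the usage-based billing meters.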

     

    • Like 2
  4. 59 minutes ago, wbmnfktr said:

    The new version looks good, and while testing, the settings worked out pretty well. I now get this error:

    (screenshot: ChatGPT API quota error)

    I checked my account and quotas are fine. Even billing settings are enabled/set.

    I meant I got this result, not the developer's result.

    • Like 1
  5. 15 minutes ago, bernhard said:

    Having everything in one huge migrate.php not only has the drawback of getting messy quickly. It also means that your migrations get slow, because whenever you change anything in that file, all migrations will run, whether they are necessary or not. If you have everything in separate files, on the other hand, you get cleaner code and easier-to-understand chunks, and RM will only trigger the migrations that have changed since the last page load, which usually keeps page loads at ~1s here even when migrations run.

    Good to know. I haven't really looked too deeply into the overhead costs for larger migrations, but that makes sense.

  6. ChatGPT is soon to replace Google, Stack Overflow and Quora for answering questions like these.

    I have used it to either confirm or cast doubt on speculation about topics and directions of research. I have asked it very technical academic questions in certain fields, and it has given balanced views on just about everything, including outsider views and newer, untested takes on certain problems. Very fun. It has even been able to tell me when a problem that I thought was still open had already been conclusively solved. Such a time saver!

    My main comment, though, is that the marriage of page classes and migrations is truly groundbreaking in terms of modular design and in particular the ability to bring over single files that can instantly put in (or remove) scaffolding for demos. This has been AWESOME for live brainstorming and troubleshooting or providing clients with examples of how extensible the system is and how quickly modifications and improvements to functionality can be built out.

    This has made me more excited about using RockMigrations - I was a bit on the fence when the automation was centralized in migrate.php, because of how unwieldy automation scripts can get in other applications I use - but bringing things that involve scaffolding into their respective page classes lets me use migrate.php just for module installations, configuration, general hooks and other core modification processes.

    This is way easier to navigate, share and especially train on. It is also very easy to coach new devs on.

    • Like 3
  7. Looking into it further, it seems that CKEditor had an old settings JS path on that field, and as a result it wasn't loading the editor - but the field was further down the editing form, so I didn't see it.

    Would that prevent the body field contents from loading?

    When I saved to fix the title issue, would an empty body tag remove the row from the fields table?

    I didn't think to check whether the editor loaded successfully. It didn't occur to me that the value of the form field would be empty if the editor failed to load, but I suppose that makes sense.

  8. Okay, now it gets weirder.

    I tried running an insert command to bring the old CKeditor data back.

    Now it won't display the data at all on the back-end (the field is completely blank, even after I switch it to a plain textarea), but the data exists in the row and I can see it on the front-end.

    Any hints as to what is going on would be helpful - running 3.0.213 (latest dev)

    Thanks,

    J

  9. EDIT: I changed the title to reflect the ultimate question...

    I have run into a problem, and any suggestions or assistance as to how I can track down the issues that led up to this situation would be helpful.

    I had a multisite setup in draft and debug mode. One site had content for some time, others were just having structure built out.

    I usually ran upgrades from a central codebase for wire and modules as well - have not had any issues for years.

    The upgrade process failed at some point in a recent iteration. I should have made independent backups of all the site DBs beforehand, but I've never had data rows disappear like this.

    After an upgrade, the field_body for a number of pages disappeared. The rows completely vanished. They were originally TinyMCE inputfields.

    It is likely that I added the CKEditor inputfield when it was released and removed the TinyMCE inputfield without changing the setting on those fields on that particular site. Is it possible that this caused those field_body rows to be deleted, because no inputfield type was set?

    I also had issues with field_title data rows that included unescaped apostrophes. I had to manually query and fix those, as they weren't showing up on the admin side - I'm guessing this has something to do with how text inputfields have been hardened recently.

    Luckily I have older backups and haven't completely lost the data, but I have often wondered about how core upgrades are done with processwire.

    Are there certain DB scripts that are automatically run when a version change is detected? It seems like version changes just get applied without a lot of fanfare. I know WordPress multisite generally needs to rerun upgrade scripts for each site in the environment.

  10. Just something to note: a lot of core wire functions include hardcoded markup, so hooking into __render is probably the cleanest way to handle most of these replacements from a markup output stance - that's what I plan to do. But there are lots of little bits to check - the FA4 icons appear all over the place. Certain places (like the installation pages) hardcode echo statements, so I think those will always need to be FA4 unless Ryan steps up to 6 completely.

    If you are just looking for a way to include Font Awesome - I'm sure you know this already - there's RockAwesome in the modules library if you just want to assign an icon from your library of choice for use on the front end. It supports whatever version you are able to link the lookup reference to in settings.

    https://processwire.com/modules/fieldtype-rock-awesome/

     

    • Like 2
  11. I'm actually working on something that allows for this in a limited capacity with my admin theme. I kickstarted with v5 back in the day and still subscribe, so my solution will reach back to 4, 5 and 6 and include Pro support, although I don't think I am going to support duotone unless I can find a good way to get theme colors in there. Haven't looked too deeply into it.

    • Like 1
  12. I'm sorry, I should have been clearer. I haven't tried your module yet; I was just trying to use the core method (which wasn't working for me), and I was asking if your extended module would perform in a way that the default setup did not.

    I will give it a try.

  13. I tried using the simple matrix_field.count>0 rule, but it didn't work.

    I didn't realize I needed to include the inputfieldset name also. e.g.

    matrix_wrapper_fieldset.matrix_field.count>0

    EDIT: Nope that didn't fix it. It just stopped showing anything.

    For some reason it is not recognizing the count of the matrix items.

    Is the data-show-if attribute in the HTML supposed to be escaped? e.g.

    data-show-if='meta_zone_1.count>0' data-required-if='meta_zone_1.count>0'

     

  14. This sounds like a lot of fun. It's amazing how complex logistics are now for web purchases, but also amazing that so much can be done with APIs and so much transparency can be provided to the customer (when needed). This reminds me that I need to rework a bunch of old FBA integration code... 

    • Like 1
  15. You know, I was looking into this as well, for the opposite reason: I have always used RepeaterMatrix to build up customizable content stacks, but I found old posts saying PageTable was more efficient for this than Repeaters. So I've been sleuthing, and unfortunately the road ends here? Maybe someone who has been here longer can chime in?

  16. @ryan have you ever worked with any products from cdata?

    https://www.cdata.com/apiserver/

    Not that you are addressing this issue, but if you come across older on-prem data sources or other odd systems that need an API wrapper, their stuff works nicely and can be set up with a pretty straightforward JSON configuration.

    They also have interesting interfaces that let you configure API calls and behaviors with JSON settings and then act on them with SQL, like a database.

    • Like 1
  17. Hi @bernhard

    I have archives of webfonts that are symlinked to /site/fonts/archive/ from a different path.

    When I pass the symlinked paths to the $rfe->styles()->add() method, the output on the webpage reverts to the original full path of the symlink.

    For example, if I have an array of stylesheets:

    $style_array = [
        '/site/assets/fonts/archive/acherusgrotesque/stylesheet.css',
        '/site/assets/fonts/archive/antonio/stylesheet.css',
        '/home/xxxxxxxx/public_html/pw/site/templates/theme/src/less/uikit.theme.less',
    ];

    And I iterate through:

    foreach($style_array as $style) {
    	$rfe->styles()->add($style);
    }

    I get the following results:

      <!-- DEBUG enabled! You can disable it either via $config or use $rf->styles()->setOptions(['debug'=>false]) -->
      <!-- rockfrontend-styles-head -->
      <!-- loading /site-base/templates/theme/src/less/uikit.theme.less (_main.php:4) -->
      <!-- loading /site-base/templates/layouts/_global.less (_main.php:6) -->
      <!-- loading /site-base/templates/sections/footer.less (_main.php:6) -->
      <link href='/site-base/templates/bundle/head.css?m=1676340014' rel='stylesheet'><!-- LESS compiled by RockFrontend -->
      <link href='/home/xxxxxx/public_html/webfonts/acherusgrotesque/stylesheet.css' rel='stylesheet'><!-- _main.php:4 -->
      <link href='/home/xxxxxx/public_html/webfonts/antonio/stylesheet.css' rel='stylesheet'><!-- _main.php:4 -->

    The plain stylesheet paths come back resolved to the symlinks' original target locations, and obviously those don't resolve properly in the browser.

    In the meantime I am going to work out a scheme to optionally import their values into the theme's LESS, but I figured I'd mention it in case there is a LESS parser setting you've included (or a default setting) that is changing the output in this way.
