Free non commercial license - MySQL Compare from Redgate

Redgate is giving out non-commercial licenses for its MySQL Compare and MySQL Data Compare tools.

I used their SQL Server Compare tools and the ToolBelt extensively many years ago, and they saved my backside time and time again.
I only happened to come across this because I was looking for a MySQL compare tool to work out the differences between my Test & Live servers.

The unfortunate caveat is that it only runs on Windows... perhaps it will still be of use to someone.
I will try running it in Parallels and see if it can access a MySQL instance running on the OS X parent...



Thanks for the info! I will take a look. I'm also a Parallels user, though I normally use it for IE/Edge testing purposes only.

For those interested, I successfully use SquidMan to connect to my MAMP Pro server on my Mac from the Parallels virtual machine and/or from other (mobile) devices on my network (actually, all requests are routed through SquidMan):

Set up a permanent local IP for your local dev computer, or take note of the one assigned to it by DHCP.

Open SquidMan and access the application preferences:
    - In the "General" tab under "HTTP port" enter "8080"
    - Next, in the "Clients" tab enter the IP addresses and/or ranges of the devices allowed to connect.
    - In the "Template" tab, comment out "http_access deny to_localhost", like this:
        # protect web apps running on the proxy host from external users
        # http_access deny to_localhost
    - Save changes.
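
For reference, the preference changes above boil down to a squid.conf along these lines. This is only a sketch: the ACL name and the client range are made-up examples, and the file SquidMan actually generates may look different.

```
# from the "General" tab
http_port 8080

# from the "Clients" tab (example range; use your own)
acl allowed_clients src 192.168.1.0/24
http_access allow allowed_clients

# commented out per the "Template" tab, so sites on the proxy host stay reachable
# protect web apps running on the proxy host from external users
# http_access deny to_localhost
```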

Launch your web server and SquidMan.app. Your local web sites are now open to visitors.

On a Mac you'll need to adjust your network settings to use this proxy for HTTP traffic:
    - Open the System Preferences panel, click the Network icon.
    - Create a new 'Location' that uses the Squid proxy.
    - Set the HTTP proxy to be "localhost", port "8080" and turn on "Web proxy (HTTP)" on the "Proxies" tab.
    - Next click on "Apply Now".
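
The same Mac proxy settings can also be applied from Terminal with `networksetup`. These are real macOS commands, but the service name "Wi-Fi" and the interface `en0` are assumptions; match them to your own setup:

```
# Look up the Mac's current LAN IP (en0 is usually the primary interface)
ipconfig getifaddr en0

# Route HTTP through the local Squid proxy and switch it on for the "Wi-Fi" service
networksetup -setwebproxy "Wi-Fi" localhost 8080
networksetup -setwebproxystate "Wi-Fi" on
```

`networksetup -setwebproxystate "Wi-Fi" off` turns it back off when you are done testing.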

On your Parallels virtual machine or remote (mobile) device connected to your local network you also need to use the proxy:
    - Go to your network settings.
    - Optionally, if it is possible to use profiles: create a new Profile that uses the Squid proxy.
    - Set the HTTP proxy to be your local dev machine's IP with port "8080".

For the Parallels virtual machine set networking to "Bridged Network".


I had to realize that "sharing" my Mac's web server over the network the way described above is for accessing the local websites, whereas connecting directly to MySQL from the Parallels Windows machine is a different matter... However, I just could not figure out how to do it, so if you succeed, please share.
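
For what it's worth, a direct MySQL connection from the VM does not involve the HTTP proxy at all, since MySQL speaks its own protocol on port 3306. A sketch of what typically has to change, assuming a stock MySQL/MAMP setup that only listens on localhost (the address below is an example):

```
[mysqld]
# listen on the LAN interface instead of 127.0.0.1 only
bind-address = 0.0.0.0
```

After restarting MySQL you would also need an account allowed to connect from the VM's address (a `CREATE USER 'someuser'@'192.168.1.%' ...` plus matching `GRANT`; names here are hypothetical), then point the Windows tool at the Mac's LAN IP on port 3306. With Parallels set to "Bridged Network" the VM sits on the same LAN, so the Mac's IP should be reachable directly.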


Yes I have, but gave up :( at least for the time being. Also, I did not figure out how to "register" MySQLComparisonBundle.exe to make use of the advertised "free for non-commercial use" offer.


I think they send you an e-mail with a registration key. I too was waiting for the e-mail, but I think a customer rep sends it without Redgate labelling, which is why you might not notice it in your inbox.

1 hour ago, FrancisChung said:

I think they send you an e-mail with a registration key.

I see, thanks. I'll wait patiently then :)


Perhaps they handle this manually via a customer rep, so you may need to wait until Monday/Tuesday.

Mine was sent by a guy by the name of Jordan Miller, and Gmail treated it like a promo e-mail, so it was nowhere to be seen initially.



  • Similar Content

    • By jds43
      Does anyone have experience with migrating content from Django to ProcessWire? Or are there any suggestions for achieving this?
    • By Brawlz
      I hope this is the correct section for my problem.
      All I need is a connection to an external database and a query getting some data. I do this in a ProcessWire page template. I am honestly not sure if it is a problem with ProcessWire or my code:
      $host = 'XXXXX';
      $user = 'XXXXX';
      $pass = 'XXXXX';
      $db   = 'XXXXX';
      $port = 3306;
      $mydb = new Database($host, $user, $pass, $db, $port);
      $result = $mydb->query("SELECT * FROM `char`"); // `char` is a MySQL reserved word, so it needs backticks
      while ($row = $result->fetch_assoc()) {
          print_r($row);
      }
      Produces the following error:
      Error: Exception: DB connect error 2002 - Connection timed out (in /customers/9/4/e/XXXX.de/httpd.www/wire/core/Database.php line 79)
      I also tried connecting without the $port variable but got the same error.
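      Error 2002 with "Connection timed out" means the failure happens at the TCP level, before MySQL ever checks credentials. A quick way to see whether the port is reachable at all from the web server's shell; the host and port below are placeholders to replace with the external database's:

      ```shell
      # Probe a TCP host/port: prints "open" if something accepts the connection,
      # "closed" if it is refused or filtered (uses bash's /dev/tcp redirection)
      probe() {
        timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null \
          && echo "open" || echo "closed"
      }
      probe 127.0.0.1 3306   # replace with your external DB host and port
      ```

      If the port reports "closed" from the web host but "open" from elsewhere, the hosting provider is likely blocking port 3306 and you would need them to whitelist the connection.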
    • By Mobiletrooper
      Hey Ryan, hey friends,
      we, Mobile Trooper, a digital agency based in Germany, use ProcessWire for an enterprise-grade intranet publishing portal which has been under heavy development for over 3 years now. Over the years, not only has the user base grown but also the platform in general. We introduced lots and lots of features thanks to ProcessWire's absurd flexibility. We came across many CMSs (or CMFs, for that matter) that don't even come close to ProcessWire. The closest were Locomotive (Rails-based) and Pimcore (PHP-based).
      So this is not your typical ProcessWire installation in terms of size.
      Currently we count:
      140 Templates (Some have 1 page, some have >6000 pages)
      313 Fields
      ~ 15k Users (For an intranet portal? That's heavy.)
      ~ 195 431 Pages (At least that's the current AUTOINCREMENT)
      I think we came to a point where ProcessWire isn't as scalable as it used to be. Our latest research measured over 20 seconds of load time (the time PHP spent assembling the HTML). That's unacceptable, unfortunately. We've implemented common performance strategies like:
      We're running on fat machines (the DB server has 32 gigs of RAM, the prod web server has 32 gigs as well; both are quad-core Xeons hosted on Azure).
      We have load balancing in place, but still, a single server needs up to 20 sec to respond to a single request, averaging around 12 sec.
      In our research we came across pages that sent over 1000 SQL queries with lots of JOINs. This is obviously needed because of PW's architecture (a field per table), but does this slow MySQL down much? For the start page we need to get somewhere around 60-80 pages, and each page needs to be queried for ~12 fields to be displayed correctly; is this too much? There are many different fields involved, like multiple Page fields which hold tags, categories etc.
      We installed Profiler Pro but it does not seem to show us the real bottleneck, it just says that everything is kinda slow and sums up to the grand total we mentioned above.
      ProCache does not help us because every user is seeing something different, so we can cache some fragments but they usually measure at around 10ms. We can't spend time optimising if we can't expect an affordable benefit. Therefore we opted against ProCache and used our own module which generates these cache fragments lazily. 
      That speeds up the whole page rendering to ~7 sec, which is acceptable compared to 20 sec but still ridiculously long.
      Our page consists of mainly dynamic parts changing every 2-5 minutes. It's different across multiple users based on their location, language and other preferences.
      We also have about 120 people working on the processwire backend the whole day concurrently.
      What do you guys think?
      Here are my questions, hopefully we can collect these in a wiki or something because I'm sure more and more people will hit that break sooner than they hoped they would:
      - Should we opt for optimising the database? >2k queries per request is a lot even for a MySQL server, and the web server CPU is basically idling during that time.
      - Do you think at this point it makes sense to use ProcessWire as a simple REST API?
      - In your experience, what fieldtypes are expensive? Page? RepeaterMatrix?
      - Ryan, what do you consider the primary bottleneck of ProcessWire?
      - Is the amount of fields too much? Would it be better if we would try to reuse fields as much as possible?
      - Is there an option to hook into ProcessWire's SQL builder, so we can write custom SQL for some selectors?
      Thanks and lots of wishes,
      Pascal from Mobile Trooper
    • By Sergio
      All of a sudden, with nothing changed on the database or server, a website was getting error when doing a search:
      Error: Exception: SQLSTATE[HY000]: General error: 23 Out of resources when opening file './your-database-name/pages_parents.MYD' (Errcode: 24 - Too many open files) (in /home/forge/example.com/public/wire/core/PageFinder.php line 413)
      #0 /home/forge/example.com/public/wire/core/Wire.php(386): ProcessWire\PageFinder->___find(Object(ProcessWire\Selectors), Array)
      #1 /home/forge/example.com/public/wire/core/WireHooks.php(723): ProcessWire\Wire->_callMethod('___find', Array)
      #2 /home/forge/example.com/public/wire/core/Wire.php(442): ProcessWire\WireHooks->runHooks(Object(ProcessWire\PageFinder), 'find', Array)
      #3 /home/forge/example.com/public/wire/core/PagesLoader.php(248): ProcessWire\Wire->__call('find', Array)
      #4 /home/forge/example.com/public/wire/core/Pages.php(232): ProcessWire\PagesLoader->find('title~=EAP, lim...', Array)
      #5 /home/forge/example.com/public/wire/core/Wire.php(383): ProcessWire\Pages->___find('title~=EAP, lim...')
      #6 /home/forge/example.com/public/wire
      This error message was shown because: you are logged in as a Superuser. Error has been logged.
      I tried several things, listed in this thread: https://serverfault.com/questions/791729/ubuntu-16-04-server-mysql-open-file-limit-wont-go-higher-than-65536
      But for some reason, MySQL was not getting its limit increased, but in the end, the one that did the trick was this:
      This worked for me on Ubuntu Xenial 16.04:
      Create the dir /etc/systemd/system/mysql.service.d
      Put in /etc/systemd/system/mysql.service.d/override.conf:
      [Service]
      LimitNOFILE=1024000
      Now execute:
      systemctl daemon-reload
      systemctl restart mysql.service
      Yes indeed, LimitNOFILE=infinity actually seems to set it to 65536.
      You can validate the above after starting MySQL by doing:
      cat /proc/$(pgrep mysql)/limits | grep files
    • By FrancisChung
      The Mastering PHP Design Patterns book from Packt Publishing is free for the next 22 hrs (as of the time of posting).