Linux Shell – Unescape an escaped JSON string

I have a process that takes nightly incremental SQL dumps (as INSERT statements) from a MySQL database and inserts them into a PostgreSQL database. The INSERT statements produced by the MySQL dump contain escaped strings (the original strings being JSON). It looks something like this:

INSERT INTO t_usage VALUES (0,'Nu3agpAxIZTJZ3CARVpxZj6Cs5OimYIqaiPVzGJIVZ/0wIEjMN/e/u7CuTKMHptGqEkvboCIvbftG/Reo1TisxknmzvPsKdlAMnHSuTMOWx/a3OPpapCASQsmASOXcpjvl+xlsqAn5280Z7iptP8H1JWf1s5Du93ZlRoE2+XAIQ=','','{\"payload\": {\"0\": {\"sw_version\": \"kw2016.03.10-ng8\", \"users\": {\"admin_users\": 1, \"users_licensed\": 1, \"locked_users\": 0, \"total_user_profiles\": 4, \"recipient_users\": 0, \"users_need_passwd_reset\": 1, \"total_users\": 1, \"suspended_users\": 0, \"restricted_users\": 0, \"users_need_activation\": 0, \"users_not_licensed\": 0, \"\\\">_users\": 0, \"end_users\": 0, \"standard_users\": 2}, \"license\": {\"search_licensed\": true, \"2fa_licensed\": true, \"expiry\": \"2025-12-25\", \"package_id\": \"kiteworks Enterprise Connect\", \"dlp_licensed\": true, \"signature\": \"xyz=\", \"deployment_id\": \"000000\", \"customer_id\": \"000000\", \"licensed_users\": 500, \"anti_virus_licensed\": true, \"deployment_type\": \"On Premise\"}, \"storage\": {\"users_storage_limit\": \"2097152000\", \"users_storage_used\": \"0\", \"tenant_storage_used\": {}, \"tenant_storage_limit\": \"-1\"}, \"microsoft_reporting\": {\"number_of_users_who_opened_documents\": \"\", \"number_of_opens_office_online\": 0, \"is_explicitly_disabled_office_online\": 0, \"total_users_who_use_office_online_and_mobile\": 0, \"number_of_opens_office_desktop\": 0, \"is_elect_office_online\": false, \"number_of_total_opens\": 0, \"number_of_documents_opened\": \"\"}, \"bandwidth\": {\"downloads\": {}, \"bandwidth_limit\": -1, \"uploads\": {}}, \"service_hostname\": \"\", \"active\": 1, \"configuration\": {\"ec_server_count\": 0, \"ec_enabled\": false, \"branding_used\": false, \"migration_status\": \"\", \"dlp_enabled\": false, \"2fa_enabled\": \"\", \"config_max_file_version\": 10, \"ec_source_types\": {\"assigned_sources\": [], \"user_sources\": []}, \"migration_licensed\": [\"Bulk Migration Licensed\", \"Migration Assistant Licensed\"], \"migration_enabled\": \"Migration not enabled\", 
\"ldap_server_cnt\": 0, \"anti_virus_enabled\": false, \"location_count\": 0, \"total_folders\": 1, \"sys_roles_count\": {\"Web\": 1, \"Search\": 1, \"Anti_virus\": 0, \"Storage\": 1, \"EC\": 0, \"Application\": 1}, \"total_files\": 0, \"search_enabled\": true, \"ldap_enabled\": false, \"sso_enabled\": \"\", \"total_files_numerous_versions\": 0}, \"customer_name\": \"host\"}, \"start_time\": 1470873600, \"end_time\": 1470959999}, \"error\": 0}','2016-08-12 22:30:30',NULL);

So far, I had used two sed commands on the SQL dump to unescape the single and double quotes.

sed -i "s/\\\'/\'\'/g" t_usage_incr.sql (converts \' to '', the Postgres escape for a single quote)
sed -i 's/\\\"/\"/g' t_usage_incr.sql (converts \" to ")

But then someone attempted an SQL injection, which left backslashes in the data, and my unescaping no longer produced valid JSON. So the part in the input

\"\\\">_users\"

gets converted into

"\\">_users"

which is still invalid JSON: the \\ decodes to a literal backslash, and the unescaped " that follows terminates the string early, leaving >_users dangling. I had missed unescaping the backslash itself. Using the following sed sequence instead:

sed -i "s/\\\'/\'\'/g" t_usage_incr.sql
sed -i 's/\\\\/\\/g' t_usage_incr.sql
sed -i 's/\\\"/\"/g' t_usage_incr.sql

results in the output

"\">_users"

which is valid JSON, because the double quote in the middle is now escaped properly.
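To sanity-check the three-step unescape, you can run it against a tiny sample fragment (the file name and sample string here are made up for illustration; GNU sed is assumed for -i without a suffix):

```shell
# Build a sample file containing MySQL-style escapes: \" for quotes, \\ for backslashes
printf '%s' '{\"key\": \"val\\\"x\"}' > sample.sql

sed -i "s/\\\'/\'\'/g" sample.sql   # \' -> '' (Postgres-style escaped single quote)
sed -i 's/\\\\/\\/g' sample.sql     # \\ -> \  (unescape the backslash itself)
sed -i 's/\\\"/\"/g' sample.sql     # \" -> "  (unescape double quotes)

cat sample.sql   # {"key": "val\"x"} - valid JSON, inner quote still escaped
```

On BSD/macOS sed, use `sed -i ''` instead of bare `sed -i`.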

Use select results as a table

You may sometimes want to query further over query results in order to keep the querying logic simple. I've used views for that in the past, when the underlying query has to be reused over and over. But for one-off purposes, you can use the SELECT query results as a derived table and query over it.

I wanted to filter rows from a table and then group the filtered results. I did the filtering in an inner SELECT and used its result as a table (aliased t) for the outer grouping query.

SELECT t.server, count(*), t.report_date
FROM (SELECT * FROM kw_server_raw
      WHERE data LIKE '%microsoft_reporting%'
      AND report_date < '2016-08-05'
      ORDER BY id, report_date) t
GROUP BY t.server, t.report_date;
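Here's a runnable sketch of the same derived-table pattern, using SQLite's in-memory database in place of Postgres (the sample rows and host names are invented; dates are ISO strings so plain string comparison works):

```python
import sqlite3

# In-memory SQLite stands in for Postgres; table and column names follow the post.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kw_server_raw (id INTEGER, server TEXT, data TEXT, report_date TEXT)")
con.executemany(
    "INSERT INTO kw_server_raw VALUES (?, ?, ?, ?)",
    [
        (1, "host-a", '{"microsoft_reporting": {}}', "2016-08-01"),
        (2, "host-a", '{"microsoft_reporting": {}}', "2016-08-02"),
        (3, "host-b", '{"microsoft_reporting": {}}', "2016-08-01"),
        (4, "host-b", '{"bandwidth": {}}',           "2016-08-01"),  # filtered out by LIKE
    ],
)

# The inner SELECT acts as a derived table aliased "t"; the outer query groups over it.
rows = con.execute(
    """
    SELECT t.server, count(*), t.report_date
    FROM (SELECT * FROM kw_server_raw
          WHERE data LIKE '%microsoft_reporting%'
          AND report_date < '2016-08-05') t
    GROUP BY t.server, t.report_date
    ORDER BY t.server, t.report_date
    """
).fetchall()
print(rows)
# [('host-a', 1, '2016-08-01'), ('host-a', 1, '2016-08-02'), ('host-b', 1, '2016-08-01')]
```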

Django Form – Selectively hide fields

Model forms have an exclude option: a list of fields you'd like to leave off the form. A regular form has no such option, but it's easy to hide fields selectively in __init__ instead. Here's how it can be implemented in a form.

class ExtendLicenseForm(forms.Form):
    extension_days = forms.IntegerField(required=True, help_text='License can be extended for a maximum of 30 days.')
    # Below is the field that I'd like to show in the form only if the kwarg passed in is True
    upload_tenant_license = forms.BooleanField(required=False, label='Upload license', help_text='Select option if you would like to automatically upload the extended license to the tenant.')

    def __init__(self, *args, **kwargs):
        # Pop the custom kwarg before calling the parent constructor, which would reject it
        upload_tenant_lic_opt = kwargs.pop('upload_tenant_license', None)
        super(ExtendLicenseForm, self).__init__(*args, **kwargs)
        if not upload_tenant_lic_opt:
            # Hide the form field if the flag is falsy
            self.fields['upload_tenant_license'].widget = forms.HiddenInput()
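The key move is popping the custom kwarg before calling the parent constructor, since forms.Form.__init__ does not accept it. The same pattern, stripped of Django so it runs standalone (class and attribute names here are illustrative):

```python
class StrictBase:
    """Stands in for forms.Form, which rejects unknown keyword arguments."""
    def __init__(self, *args, **kwargs):
        if kwargs:
            raise TypeError("unexpected kwargs: %r" % kwargs)

class ExtendLicenseForm(StrictBase):
    def __init__(self, *args, **kwargs):
        # Pop the custom flag (with a default) before it reaches the parent;
        # this also replaces the "if 'key' in kwargs" dance with one call.
        self.show_upload = kwargs.pop('upload_tenant_license', False)
        super().__init__(*args, **kwargs)

print(ExtendLicenseForm(upload_tenant_license=True).show_upload)  # True
print(ExtendLicenseForm().show_upload)                            # False
```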


Postgres – Update table data by replacing words

While working with some email templates, I made a typo. I had a few options to choose from: open each template in the UI and make the change by hand, or write individual UPDATE statements in a SQL editor, modify the data, and execute. Either would be quite laborious. Then I searched for something that could do an UPDATE with a string replace across a table and, voila, found it. Here's the solution:

UPDATE commander_commonemailtemplate 
SET email_body = replace(email_body, 'file from the links at the bottom', 'file from the link at the bottom');

As you noticed, "links" was supposed to be replaced with "link". But the reason I matched a longer phrase was so that if there are other rightful occurrences of the word "links", they won't get replaced.
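A quick way to convince yourself the longer match phrase is safe is to run the same replace() against throwaway data first (SQLite in-memory here instead of Postgres; replace() behaves the same way, and the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE commander_commonemailtemplate (id INTEGER, email_body TEXT)")
con.executemany("INSERT INTO commander_commonemailtemplate VALUES (?, ?)", [
    (1, "Download the file from the links at the bottom."),
    (2, "Useful links are listed below."),  # a rightful 'links' that must survive
])

# Same UPDATE as in the post: the long phrase anchors the replacement
con.execute(
    "UPDATE commander_commonemailtemplate SET email_body = "
    "replace(email_body, 'file from the links at the bottom', 'file from the link at the bottom')"
)

bodies = [r[0] for r in con.execute(
    "SELECT email_body FROM commander_commonemailtemplate ORDER BY id")]
print(bodies)
# ['Download the file from the link at the bottom.', 'Useful links are listed below.']
```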

Django – Display hyperlinks in templates

I had a couple of challenges here beyond just displaying hyperlinks:

1. Displaying the hyperlinks
2. Showing a hyperlink for the first item in each row, but not for the rest

<table class="sortable">
  <tr>
    {% for column in report_header %}
      <th>{{ column }}</th>
    {% endfor %}
  </tr>
  {% for row in report_body %}
    <tr class="{% cycle 'row1' 'row2' %}">
      {% for column in row %}
        {% if forloop.counter == 1 %}
          {% with row.0|urlencode as encoded_name %}
            <td><a href="#" onclick="window.open('{% url showAccountPage encoded_name %}', '_blank');">{{ row.0 }}</a></td>
          {% endwith %}
        {% else %}
          <td>{{ column }}</td>
        {% endif %}
      {% endfor %}
    </tr>
  {% endfor %}
</table>

PHP Warning – date(): It is not safe to rely on the system’s timezone settings

Ever encountered this warning while executing PHP code?

PHP Warning:  date(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting 
or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely 
misspelled the timezone identifier. We selected the timezone 'UTC' for now, but please set date.timezone to select your timezone. 
in /path/file.php on line 10

Solution – explicitly add a timezone to the php.ini file

date.timezone = "America/Los_Angeles"

A list of supported timezone identifiers is available in the PHP manual (List of Supported Timezones).

CentOS 7 – Install PHP and Postgres

First time using CentOS. Thought it would be useful to list the steps to install PHP and Postgres and get them working together.

0. Always recommended: update first

yum update
# update will update every currently installed package

1. Install PHP

yum install php
# Validate installation and version by executing php --info

2. Install PostgreSQL

yum install postgresql-server postgresql-contrib 

3. Install php-pgsql connector

yum install php-pgsql

4. Now start and enable postgres

# On CentOS 7, initialize the cluster with postgresql-setup (the old
# "service postgresql initdb" syntax no longer works)
postgresql-setup initdb
systemctl enable postgresql
systemctl start postgresql

5. Modify the pg_hba.conf file located under /var/lib/pgsql/data/. Find the lines that look like this

host    all             all             127.0.0.1/32            ident
host    all             all             ::1/128                 ident

Then replace “ident” with “md5”, so they look like this:

host    all             all             127.0.0.1/32            md5
host    all             all             ::1/128                 md5

Then restart PostgreSQL (systemctl restart postgresql) so the change takes effect.

6. Create local user and set user password

sudo adduser automation
sudo passwd automation

7. Log in as the postgres user, connect with psql, and create a user role (with the same password as the shell user) plus a new database

su - postgres
psql
CREATE ROLE automation WITH LOGIN PASSWORD 'UserPassword';
CREATE DATABASE masterdb OWNER automation;

That last part was important, because without it I wasn't able to connect to the database. I'm not sure it fully explains the problem, though, since GRANT ALL PRIVILEGES should have taken care of it. Needs to be researched further.

8. Create a test connection script in PHP to validate database connection

<?php
// Configure DB parameters
$host = "localhost";
$dbname = "masterdb";
$dbuser = "automation";
$userpass = "UserPassword";

$con = pg_connect("host=$host dbname=$dbname user=$dbuser password=$userpass");

if (!$con) {
    die('Could not connect');
} else {
    echo "Connected to local DB";
}
?>