Install/Uninstall PostgreSQL on a Mac

Use the installer from the EnterpriseDB website to install on a Mac. There are installers available from BigSQL as well, but they aren't quite standard, so uninstallation can be a little more work. Here are instructions to install and uninstall PostgreSQL on your Mac.

Install postgres
1. Download the installer from the following link and click through the installation process:

https://www.enterprisedb.com/downloads/postgres-postgresql-downloads#macosx

2. Follow these instructions to execute psql from the command line on a Mac:

https://varunver.wordpress.com/2016/09/26/executing-psql-from-command-prompt-on-mac-osx-after-installing-postgresql/

Uninstall process
1. If you used the link above to install Postgres on your system, it will have been installed under /Library/PostgreSQL. Navigate to the following directory from your terminal as root (the version, 9.6 here, may differ based on what was installed on your system):

cd /Library/PostgreSQL/9.6/

2. Open the uninstaller and follow the instructions to remove the application:

open uninstall-postgresql.app

3. Remove the PostgreSQL and data folders; the wizard will notify you that these were not removed:

rm -rf /Library/PostgreSQL

4. Remove the configuration file that keeps track of the database version you are using:

rm /etc/postgres-reg.ini

5. Remove the PostgreSQL user using System Preferences -> Users & Groups.

That should be all! The uninstall wizard will have removed all icons and start-up application files, so you don't have to worry about those.
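Taken together, steps 1 through 4 can be sketched as a single shell script to run as root. This is a hedged sketch, not the official uninstaller: the 9.6 path is the default EnterpriseDB location, and each step is guarded so it is skipped when the file or folder is absent.

```shell
#!/bin/sh
# Sketch of the uninstall sequence above; adjust 9.6 to your installed version.
PG_DIR=/Library/PostgreSQL/9.6

# 1-2. Run the bundled uninstaller if it is present
[ -d "$PG_DIR/uninstall-postgresql.app" ] && open "$PG_DIR/uninstall-postgresql.app"

# 3. Remove the leftover PostgreSQL and data folders
[ -d /Library/PostgreSQL ] && rm -rf /Library/PostgreSQL

# 4. Remove the version-tracking configuration file
[ -f /etc/postgres-reg.ini ] && rm -f /etc/postgres-reg.ini

echo "uninstall steps finished"
```

Step 5 (removing the PostgreSQL user) still has to be done through System Preferences -> Users & Groups.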


Run queries like count, distinct, etc. on a list of CSV/Excel column values in a DB

Someone recently sent me an Excel sheet with a set of data and claimed that one of the columns, which stored (supposedly unique) IDs, had a set of duplicates. One way to inspect this data in Excel is to sort it and scroll through to see if you can spot any duplicates. The other way, which I found cooler and easier for a developer, is to get the data as a derived table in a SQL editor and run queries on top of it. Here are the steps to use the dataset in SQL.

1. Get the raw data that you need to query upon, e.g. the 10 rows below for simplicity:

7
8
9
12
15
16
17
19
21
23

2. Use your favorite text editor to convert the data to something like this (note the UNION ALL: a plain UNION would silently remove duplicate rows, which are exactly the rows we're hunting for):

SELECT 7
UNION ALL SELECT 8
UNION ALL SELECT 9
UNION ALL SELECT 12
UNION ALL SELECT 15
UNION ALL SELECT 16
UNION ALL SELECT 17
UNION ALL SELECT 19
UNION ALL SELECT 21
UNION ALL SELECT 23
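As an aside, this conversion can be scripted instead of hand-edited. A sketch with awk, assuming the IDs sit one per line in a file (ids.txt is made up for the example, and a duplicate 8 is included on purpose):

```shell
# Build a sample file with one ID per line (includes a duplicate 8).
printf '7\n8\n8\n9\n' > ids.txt

# Emit "SELECT <id>" for the first line, "UNION ALL SELECT <id>" for the rest.
awk 'NR == 1 { print "SELECT " $1; next } { print "UNION ALL SELECT " $1 }' ids.txt
```

The output is a ready-to-paste chain starting with SELECT 7 followed by one UNION ALL SELECT line per remaining ID.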

3. Now wrap this in a sub-query so it can be used as a table in SQL:

SELECT depid FROM
(SELECT 7
UNION ALL SELECT 8
UNION ALL SELECT 9
UNION ALL SELECT 12
UNION ALL SELECT 15
UNION ALL SELECT 16
UNION ALL SELECT 17
UNION ALL SELECT 19
UNION ALL SELECT 21
UNION ALL SELECT 23) deployments(depid)

4. Now, to find the duplicates, all I had to do was compare the count of all records with the count of distinct records. If the two didn't match, I could group the data and catch the offenders.

SELECT count(depid) AS total, count(DISTINCT depid) AS distinct_total FROM
(SELECT 7
UNION ALL SELECT 8
UNION ALL SELECT 9
UNION ALL SELECT 12
UNION ALL SELECT 15
UNION ALL SELECT 16
UNION ALL SELECT 17
UNION ALL SELECT 19
UNION ALL SELECT 21
UNION ALL SELECT 23) deployments(depid)
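The grouping step mentioned in step 4 can be sketched like this (a duplicate 8 is added so the query actually has something to report; UNION ALL matters here too, since plain UNION would merge the duplicates away before they could be counted):

```sql
-- List each ID that appears more than once in the inline data set.
SELECT depid, count(*) AS occurrences FROM
(SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 8 UNION ALL SELECT 9) deployments(depid)
GROUP BY depid
HAVING count(*) > 1;
-- returns one row: depid = 8, occurrences = 2
```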

Restart postgres on Production machines without /etc/init.d/postgresql restart

If you want to avoid having to cycle the DB (i.e. /etc/init.d/postgresql restart), you can reload its configuration on the command line (as root):

kill -HUP `pidof postgres`

Once you do that, you will notice a message like this in the /var/log/postgresql/postgresql-9.4-main.log:

“2017-01-19 15:33:11 PST [24237-2] LOG: received SIGHUP, reloading configuration files”

Postgres receives the “hang up” (HUP) signal from the kill command and knows to re-read its configuration files without taking the database down – ZERO outage.
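If you already have a psql session open, the same reload can be triggered from SQL: pg_reload_conf() sends the SIGHUP to the postmaster for you.

```sql
-- Ask the server to re-read its configuration files;
-- returns true when the signal was sent successfully.
SELECT pg_reload_conf();
```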

Executing psql from command prompt on Mac OSX after installing PostgreSQL

I installed PostgreSQL but wasn't able to execute psql at the command prompt to run queries. I had to add the PostgreSQL bin directory to the PATH variable in order to do that. It's best to append the path in ~/.bash_profile so it's ready and available every time the shell loads. Add the following lines (adjusting for the location and version of your PostgreSQL install):

# Modify PATH
export PATH=$PATH:/Library/PostgreSQL/9.5/bin
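As a sketch, the append can also be done from the terminal, guarded so repeated runs don't stack duplicate entries (the 9.5 path is the default EnterpriseDB install location and may differ on your machine):

```shell
# Append the PostgreSQL bin directory to PATH in ~/.bash_profile, but only
# if it isn't there already; $PATH is kept literal so it expands at login.
PG_BIN=/Library/PostgreSQL/9.5/bin
PROFILE="$HOME/.bash_profile"
grep -qs "$PG_BIN" "$PROFILE" || printf '\n# Modify PATH\nexport PATH=$PATH:%s\n' "$PG_BIN" >> "$PROFILE"
```

Running it a second time is a no-op, because the grep guard finds the entry already present.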

Linux Shell – Unescape an escaped JSON string

I have a process that gets nightly incremental SQL dumps as INSERT statements from a MySQL database and inserts them into a PostgreSQL database. The INSERT statements output by the MySQL dump contain an escaped string (the original string being JSON). It looks something like this:

INSERT INTO t_usage VALUES (0,'Nu3agpAxIZTJZ3CARVpxZj6Cs5OimYIqaiPVzGJIVZ/0wIEjMN/e/u7CuTKMHptGqEkvboCIvbftG/Reo1TisxknmzvPsKdlAMnHSuTMOWx/a3OPpapCASQsmASOXcpjvl+xlsqAn5280Z7iptP8H1JWf1s5Du93ZlRoE2+XAIQ=','192.168.97.168','{\"payload\": {\"0\": {\"sw_version\": \"kw2016.03.10-ng8\", \"users\": {\"admin_users\": 1, \"users_licensed\": 1, \"locked_users\": 0, \"total_user_profiles\": 4, \"recipient_users\": 0, \"users_need_passwd_reset\": 1, \"total_users\": 1, \"suspended_users\": 0, \"restricted_users\": 0, \"users_need_activation\": 0, \"users_not_licensed\": 0, \"\\\">_users\": 0, \"end_users\": 0, \"standard_users\": 2}, \"license\": {\"search_licensed\": true, \"2fa_licensed\": true, \"expiry\": \"2025-12-25\", \"package_id\": \"kiteworks Enterprise Connect\", \"dlp_licensed\": true, \"signature\": \"xyz=\", \"deployment_id\": \"000000\", \"customer_id\": \"000000\", \"licensed_users\": 500, \"anti_virus_licensed\": true, \"deployment_type\": \"On Premise\"}, \"storage\": {\"users_storage_limit\": \"2097152000\", \"users_storage_used\": \"0\", \"tenant_storage_used\": {}, \"tenant_storage_limit\": \"-1\"}, \"microsoft_reporting\": {\"number_of_users_who_opened_documents\": \"\", \"number_of_opens_office_online\": 0, \"is_explicitly_disabled_office_online\": 0, \"total_users_who_use_office_online_and_mobile\": 0, \"number_of_opens_office_desktop\": 0, \"is_elect_office_online\": false, \"number_of_total_opens\": 0, \"number_of_documents_opened\": \"\"}, \"bandwidth\": {\"downloads\": {}, \"bandwidth_limit\": -1, \"uploads\": {}}, \"service_hostname\": \"pa.dev\", \"active\": 1, \"configuration\": {\"ec_server_count\": 0, \"ec_enabled\": false, \"branding_used\": false, \"migration_status\": \"\", \"dlp_enabled\": false, \"2fa_enabled\": \"\", \"config_max_file_version\": 10, \"ec_source_types\": {\"assigned_sources\": [], \"user_sources\": []}, \"migration_licensed\": [\"Bulk Migration Licensed\", \"Migration Assistant Licensed\"], \"migration_enabled\": 
\"Migration not enabled\", \"ldap_server_cnt\": 0, \"anti_virus_enabled\": false, \"location_count\": 0, \"total_folders\": 1, \"sys_roles_count\": {\"Web\": 1, \"Search\": 1, \"Anti_virus\": 0, \"Storage\": 1, \"EC\": 0, \"Application\": 1}, \"total_files\": 0, \"search_enabled\": true, \"ldap_enabled\": false, \"sso_enabled\": \"\", \"total_files_numerous_versions\": 0}, \"customer_name\": \"host\"}, \"start_time\": 1470873600, \"end_time\": 1470959999}, \"error\": 0}','2016-08-12 22:30:30',NULL);

So far, I had used sed commands on the SQL dump to unescape the single and double quotes:

sed -i "s/\\\'/\'\'/g" t_usage_incr.sql (converting \' to '')
sed -i 's/\\\"/\"/g' t_usage_incr.sql (converting \" to ")

But someone had attempted an SQL injection, which left backslashes in the data, and my unescaping no longer produced valid JSON. So this part of the input

\"\\\">_users\"

gets converted into

"\\">_users"

which is still invalid JSON. I had missed unescaping the backslash itself. The following sed sequence, when used:

sed -i "s/\\\'/\'\'/g" t_usage_incr.sql
sed -i 's/\\\\/\\/g' t_usage_incr.sql
sed -i 's/\\\"/\"/g' t_usage_incr.sql

results in output

"\">_users"

which finally is valid JSON, because the double quote in the middle is now properly escaped.
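Here is the three-pass sequence applied to a small sample containing just the problematic fragment (sample.txt is a throwaway file for the demo; GNU sed is assumed, since -i without a suffix differs on macOS):

```shell
# Write a fragment containing the escaped-escaped quote from the injection attempt.
printf '%s' '{\"\\\">_users\": 0}' > sample.txt

sed -i "s/\\\'/\'\'/g" sample.txt   # \'  -> ''  (a no-op on this sample)
sed -i 's/\\\\/\\/g'   sample.txt   # \\  -> \   (unescape the backslash first)
sed -i 's/\\\"/\"/g'   sample.txt   # \"  -> "

cat sample.txt   # {"\">_users": 0}
```

The order matters: unescaping the backslashes before the double quotes is what leaves the inner quote correctly escaped.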

Use select results as a table

You may sometimes want to query further over query results in order to keep the querying logic simple. I've used views to do that in the past when the underlying query has to be reused over and over again. But for temporary purposes, you can use the SELECT query results as a table and query over them.

e.g. I wanted to filter results from a table and then group them further. I filtered the data using a SELECT statement and used it as a table in the outer query for grouping.

SELECT t.server, count(*), report_date
FROM 
(SELECT * FROM kw_server_raw
   WHERE data LIKE '%microsoft_reporting%'
   AND report_date < '8/5/2016'
   ORDER BY id, report_date) t
GROUP BY t.server, report_date;
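The same pattern in a self-contained form, using an inline VALUES list as the inner "table" so it runs on Postgres without any schema (the server names are made up for the sketch):

```sql
-- The inner VALUES list plays the role of the filtered subquery;
-- the outer query groups over it exactly as in the example above.
SELECT t.server, count(*) AS reports
FROM (VALUES ('web-1'), ('web-1'), ('web-2')) AS t(server)
GROUP BY t.server;
```

This returns two rows, web-1 with a count of 2 and web-2 with a count of 1, in no guaranteed order.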

Postgres – Update table data by replacing words

While working with some email templates, I made a typo. Now, I had a few options to choose from: open each template from the UI and make the change, or use a SQL editor, create UPDATE commands, modify the data in the query and execute them. Either of those would be quite laborious. So I searched for something that could do an UPDATE with a string replace across all rows in a table and, voila, found it. Here's the solution:

UPDATE commander_commonemailtemplate 
SET email_body = replace(email_body, 'file from the links at the bottom', 'file from the link at the bottom');

As you noticed, "links" was supposed to be replaced with "link". But the reason I chose a longer phrase was that if there are other rightful occurrences of the word "links", they won't get replaced.
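A cautious variant of the same statement: preview the affected rows first, then scope the UPDATE with a WHERE clause so only rows containing the typo are rewritten. The table and column names follow the example above; the id column in the preview is an assumption.

```sql
-- Preview which templates contain the typo before touching them.
SELECT id, email_body
FROM commander_commonemailtemplate
WHERE email_body LIKE '%file from the links at the bottom%';

-- Then rewrite only those rows.
UPDATE commander_commonemailtemplate
SET email_body = replace(email_body,
                         'file from the links at the bottom',
                         'file from the link at the bottom')
WHERE email_body LIKE '%file from the links at the bottom%';
```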