AWS: Python Kinesis Streams

This entry is part 2 of 3 in the series AWS & Python

If you haven’t already done so, please refer to the AWS setup section, which is part of this series. I will continue to update that section as time goes on.

To put something on the Kinesis stream you need to use the “connection_kinesis” client you set up in the previous tutorial on setting up the connection. You will need to supply the stream name, the partition key and the data.

  1. response = connection_kinesis.put_record(StreamName=##KINESIS_STREAM##, Data=##DATA##, PartitionKey=##FILE_NAME##)

Depending on your data you may need to utf8 encode it first. For example:

  1. bytearray(##MY_DATA##, 'utf8')
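A minimal round trip shows what the encoding does (the sample string here is just illustrative):

```python
# Text encoded to utf8 bytes; bytes/bytearray is what put_record expects for Data
payload = bytearray("héllo wörld", "utf8")

# Accented characters take two bytes each in utf8
decoded = payload.decode("utf8")  # and back to text again
print(len(payload), decoded)
```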

To read from the Kinesis stream you need to set up the shard iterator and then retrieve the data from the stream, not forgetting to grab the new shard iterator from the returned records. Remember not to query the stream too fast.

  1. #shardId-000000000000 is the format of the shard id
  2. shard_it = connection_kinesis.get_shard_iterator(StreamName=##KINESIS_STREAM##, ShardId='shardId-000000000000', ShardIteratorType='LATEST')["ShardIterator"]
  3.  
  4. recs = connection_kinesis.get_records(ShardIterator=shard_it, Limit=1)
  5.  
  6. #This is the new shard iterator returned after querying the data from the stream.
  7. shard_it = recs["NextShardIterator"]
  8.  
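The read pattern above chains iterators: each get_records response hands you the iterator for the next call. That loop can be sketched with a stand-in client (FakeKinesis below is purely illustrative, not part of boto3):

```python
import time

class FakeKinesis:
    """Stand-in for connection_kinesis, for illustration only."""
    def get_shard_iterator(self, StreamName, ShardId, ShardIteratorType):
        return {"ShardIterator": "it-0"}

    def get_records(self, ShardIterator, Limit):
        n = int(ShardIterator.split("-")[1])
        return {"Records": [{"Data": b"rec-%d" % n}],
                "NextShardIterator": "it-%d" % (n + 1)}

client = FakeKinesis()
shard_it = client.get_shard_iterator(StreamName="stream",
                                     ShardId="shardId-000000000000",
                                     ShardIteratorType="LATEST")["ShardIterator"]

collected = []
for _ in range(3):
    recs = client.get_records(ShardIterator=shard_it, Limit=1)
    collected.extend(r["Data"] for r in recs["Records"])
    shard_it = recs["NextShardIterator"]  # always advance to the returned iterator
    time.sleep(0)  # in real code, pause between polls so you don't get throttled
```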

 

AWS: Python Setup

This entry is part 1 of 3 in the series AWS & Python

When you want to work with S3 or a Kinesis stream you first need to set up the connection. At the time of this writing I am using boto3 version 1.3.1.

Next we need to import the package.

  1. import boto3

Next we setup the session and specify what profile we will be using.

  1. profile = boto3.session.Session(profile_name='prod')

The profile name comes from the “credentials” file. You can set the environment variable “AWS_SHARED_CREDENTIALS_FILE” to specify which credentials file to use. You can set up the credentials file like below. You can change “local” to anything you want; I normally use “stage”, “dev” or “prod”.

  1. [local]
  2. aws_access_key_id=##KEY_ID##
  3. aws_secret_access_key=##SECRET_ACCESS_KEY##
  4. region=##REGION##
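For example, to point boto3 at a non-default credentials file before running your script (the path below is illustrative):

```shell
export AWS_SHARED_CREDENTIALS_FILE=/home/myuser/.aws/credentials_prod
```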

Next we need to set up the connection to S3, using the profile we created above.

  1. connection_s3 = profile.resource('s3')

If we also want to use a Kinesis stream then we need to set up that connection too, again using the profile we created above.

  1. connection_kinesis = profile.client('kinesis')

Python: Enums

There are a variety of different ways to create enums in Python; I show you a few below. They are not full versions, but you get the idea. If you have another way please feel free to add it.

Option 1:

  1. def enum(**enums):
  2.       return type('Enum', (), enums)
  3.  
  4. my_enum = enum(NONE=0, SOMEVAL=1, SOMEOTHERVAL=2)
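Option 1 produces a plain class whose attributes are ordinary ints, so members compare naturally:

```python
def enum(**enums):
    return type('Enum', (), enums)

my_enum = enum(NONE=0, SOMEVAL=1, SOMEOTHERVAL=2)

print(my_enum.SOMEVAL)       # members are just class attributes
print(my_enum.SOMEVAL == 1)  # so equality with ints works directly
```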

Option 2:
You will notice that you pass Enum into the class. This way you can also declare classmethods for converting the enum to and from a string. It’s really your choice how you get the string representation: you could use either str(MyEnum.VALUE) or MyEnum.tostring(MyEnum.VALUE).

  1. from enum import Enum
  2.  
  3. class MyEnum(Enum):
  4.       VALUE = 0
  5.  
  6.       def __str__(self):
  7.             if self is MyEnum.VALUE:
  8.                   return 'Value'
  9.             else:
  10.                   return 'Unknown ({})'.format(self.value)
  11.       def __eq__(self, y):
  12.             return self.value == y
  13.  
  14.       @classmethod
  15.       def fromstring(cls, value):
  16.             """
  17.             Converts string to enum
  18.             """
  19.             return getattr(cls, value.upper(), None)
  20.  
  21.       @classmethod
  22.       def tostring(cls, val):
  23.             """
  24.             Converts enum to string
  25.             """
  26.             #vars(cls).iteritems() was Python 2; __members__ works in Python 3
  27.             for k, v in cls.__members__.items():
  28.                   if v is val:
  29.                         return k

Option 3:

  1. class MyEnum():
  2.       NONE = 1
  3.       VALUE = 2
  4.       def __init__(self, Type):
  5.             if Type is None:
  6.                   self.value = MyEnum.VALUE
  7.             else:
  8.                   self.value = int(Type)
  9.       def __str__(self):
  10.             if self.value == MyEnum.VALUE:
  11.                   return 'Value'
  12.             else:
  13.                   return 'None'
  14.       def __eq__(self, y):
  15.             return self.value == y

Python: For Loops

There are a variety of different ways to write a for loop in Python. See below for the different options. If you have any other suggestions please feel free to add a comment.

Loop through a list one record at a time.

  1. for rec in data:
  2.       #Do Something

Loop through a range of numbers up to 100.

  1. for i in range(100):
  2.       #Do Something

Enumerate a list getting the index and value.

  1. for idx, value in enumerate(data):
  2.       #Do Something

Inline for loop.

  1. d = [ x for x in data]

Inline for loop with an if condition.

  1. d = [ x for x in data if x==1]
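Running the patterns above against some concrete data makes the differences clear:

```python
data = [1, 2, 1, 3]

doubled = []
for rec in data:            # plain loop over each record
    doubled.append(rec * 2)

pairs = []
for idx, value in enumerate(data):  # index and value together
    pairs.append((idx, value))

copied = [x for x in data]            # inline for loop
ones = [x for x in data if x == 1]    # inline for loop with a condition

print(doubled, pairs, copied, ones)
```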

Java: Connect to Postgres

Below are the steps to set up a connection to a Postgres DB, plus some other options that you can use.

Pom.xml:

  1. <dependency>
  2.       <groupId>org.postgresql</groupId>
  3.       <artifactId>postgresql</artifactId>
  4.       <version>9.4.1208.jre7</version>
  5. </dependency>

Import:

  1. import java.sql.Connection;
  2. import java.sql.DriverManager;
  3. import java.sql.PreparedStatement;
  4. import java.sql.ResultSet;
  5. import java.sql.SQLException;

Build The Connection:

  1. Class.forName("org.postgresql.Driver");
  2. Connection connection = DriverManager.getConnection(##URL##, ##USER##, ##PASS##);

Preparing the Query:
We use “PreparedStatement” to set up the query, which allows us to use parameters in it.

  1. PreparedStatement ps = connection.prepareStatement("select id from table where column=?");

If your query has parameters (i.e. ?) then you will need to pass in a value for each one. Notice the ##POSITION## and ##VALUE## placeholders below: position is the location where the parameter appears in the query, starting at 1. You can set various types of data, for example integer, json, etc.

  1. ps.setString(##POSITION##, ##VALUE##);

After we perform a query we need to retrieve the data using “ResultSet”. How we read it depends on whether we expect one row or many. Below are examples of one row returned vs multiple rows returned.

  1. ResultSet rs = ps.executeQuery();
  2.  
  3. if (rs.next()) {
  4.       int variable = rs.getInt("id");
  5. }
  6.  
  7. while (rs.next()) {
  8.       //Do Something
  9. }

Insert:
If you want to perform an insert you don’t need “executeQuery”; you can just call “execute”.

  1. ps = connection.prepareStatement("insert into mytable (column) values (?)");
  2. ps.setInt(##POSITION##, ##VALUE##);
  3. ps.execute();

Batching:
Sometimes we have a lot of updates or inserts to perform. We should batch that.

  1. //You set up the preparedStatement and then add to the batch.
  2. ps.addBatch();
  3.  
  4. //You can set a max batch size to send the batch once it hits that amount
  5. if (++count % batchSize == 0) {
  6.       ps.executeBatch();
  7. }

Passing Json as a Parameter:
We create the PGobject (from the org.postgresql.util package), set its type to json and its value as a string in json format. Then we set the preparedStatement parameter to the object we just created.

  1. final PGobject jsonObject = new PGobject();
  2. jsonObject.setType("json");
  3. jsonObject.setValue(value.toString());
  4.  
  5. //Now set the json object into the preparedStatement
  6. ps.setObject(##POSITION##, jsonObject);

Various Commands

Below are some useful commands for those either just learning Ubuntu 14.04 or a resource for various commands. I will continually expand on them.

Directory Exists:
  1. if [ ! -d "##DIRECTORY##" ]; then
  2.       : #Do Something (the body cannot be empty)
  3. fi
Format a Number as two digits even if it is 1:
  1. $(printf %02d $variable)
Increment a Number:
  1. variable=`expr $variable + 1`
Create a new file:
  1. touch ##FILENAME##
Check if a file contains a string:

This will return 0 if it is not found.

  1. $(echo `grep -c '##TextToSearch##' ##FileNameToSearch##`)
Create a link to a file:
  1. ln -s ##OriginalFile## ##LinkedFile##
List all packages installed:
  1. dpkg -l
Remove installed package:
  1. sudo apt-get --purge remove ##PACKAGENAME##
Install Package:
  1. sudo apt-get install ##PACKAGENAME##
Package Exists:

This will return 0 if it is not found.

  1. $(echo `dpkg-query -l | grep -c ##PACKAGE_NAME##`)
Python Package Exists:

This will return 0 if it is not found.

  1. $(echo `pip freeze | grep -c ##PACKAGE_NAME##`)
Get IP Address:
  1. ip addr show
Find a file:
  1. find / -type f -name "##FILENAME##"
Restart Service:
  1. /etc/init.d/postgresql restart
  2. invoke-rc.d postgresql restart
Kill a process:
  1. kill ##PID##
Terminal Loop:

In the terminal, let’s say you want to loop and display a value; you can do it like below. I am just printing free memory, but really you can do anything.

  1. while true; do free -m | grep /+ | gawk '{ print $4 }'; sleep 2; done
Switch to root:
  1. sudo su root
Add Text to File:
  1. echo "##TextToAdd##" >> ##FileNameToAddTo##
Add Text Above Other Text:
  1. sed -i 's/^##LookForText##/##TextToAdd##\n&/' ##FileNameToAddTo##
Loop:
  1. until [ $Variable -gt 3 ]
  2. do
  3.       Variable=`expr $Variable + 1`
  4. done

Postgres: Restore

Restoring a postgres instance is rather straightforward. If you have a better way please feel free to let me know.

  1. pg_restore -c --if-exists -Fd -h ##SERVERNAME## -p ##SERVERPORT## -U ##USER## -d ##DB_NAME## -v ##FILENAME_W_PATH## > ##OUTPUT_LOG## 2>&1
  1. -Fc: Instructs that the dump was compressed in custom format; use this if the backup was taken with -Fc.
  2. -Fd: Instructs that the dump was compressed in directory format (matches the command above).
  3. --if-exists: Used with -c.

Postgres: Backup

Backing up a postgres instance is rather straightforward. You can automate the backup or run it manually; I have documented both below. If you have a better way please feel free to let me know.

Manual:

  1. pg_dump -h ##SERVERNAME## -p ##SERVERPORT## -U ##USER## -j 8 -c --if-exists -Fd -b -v -f ##FILENAME_W_PATH## ##DB_NAME## > ##OUTPUT_LOG## 2>&1

There are a variety of options you can choose from but the above is a standard I like to use.

  1. -Fc: This compresses the data and allows for restoring via pg_restore.
  2. -Fd: This compresses the data to a directory structure for restoring via pg_restore.
  3. -j 8: This runs 8 concurrent jobs.
  4. -N mySchema: This excludes the schema mySchema from being backed up.
  5. --if-exists: Used with -c.

Automated:

You can also use PgAgent to do scheduled backups, and I recommend this; however, I don’t have the steps yet. That will hopefully come shortly.

You should create a backup user for the automated version. Here is what the user should have.

  1. CREATE USER ##BACKUP_USER## WITH ENCRYPTED PASSWORD '##PASSWORD##';
  2. GRANT CONNECT ON DATABASE ##DATABASE NAME## TO ##BACKUP_USER##;
  3. GRANT USAGE ON SCHEMA public TO ##BACKUP_USER##;
  4. GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO ##BACKUP_USER##;
  5. GRANT SELECT ON ALL TABLES IN SCHEMA public TO ##BACKUP_USER##;

Whichever OS you are using there is a different way to set it up.

Windows:
  1. You need to create “pgpass.conf” file in “%APPDATA%\postgresql” directory.
  2. pgpass should contain connection information such as “hostname:port:database:username:password”
  3. You can then create a batch file to run pg_dump; an example of such a batch file is below. You could also write a PowerShell script, which is the preferred direction to go.
  1. @echo off
  2. for /f "tokens=1-4 delims=/ " %%i in ("%date%") do (
  3.       set day=%%i
  4.       set month=%%j
  5.       set year=%%k
  6. )
  7.  
  8. for /f "tokens=1-4 delims=: " %%i in ("%time%") do (
  9.       set hour=%%i
  10.       set minute=%%j
  11.       set second=%%k
  12. )
  13.  
  14. set backupDir=\\NETWORK_SHARE\%month%_%day%_%year%
  15. set backupFile=DBName_%month%_%day%_%year%_%hour%_%minute%_%second%.backup
  16.  
  17. if not exist %backupDir% (mkdir %backupDir%)
  18.  
  19. C:\"Program Files"\PostgreSQL\9.4\bin\pg_dump.exe -h ##SERVERNAME## -p ##SERVERPORT## -U ##BACKUP_USER## -j 8 -c --if-exists -Fd -b -v -f %backupDir%\%backupFile% ##DB_NAME## > ##OUTPUT_LOG## 2>&1
Ubuntu:

You need to create the .pgpass file in the user’s home directory. It should contain connection information in the form hostname:port:database:username:password. You will also need to modify the permissions to owner read/write.

  1. touch .pgpass
  2. chmod 600 .pgpass

Next we need to set up the crontab job by entering “crontab -e”. You can call the backup.sh file on whatever schedule you want.
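As an example crontab entry (the path and schedule are placeholders), this would run backup.sh every night at 2am:

```shell
0 2 * * * /home/##USER##/backup.sh
```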

  1. #! /bin/sh
  2. currentDate=$(date +"%m_%d_%Y")
  3. currentDateTime=$currentDate$(date +"_%H_%M_%S")
  4.  
  5. if [ ! -d ##NETWORK_SHARE##/##DB_NAME## ]
  6. then
  7.       mkdir ##NETWORK_SHARE##/##DB_NAME##
  8. fi
  9.  
  10. if [ ! -d ##NETWORK_SHARE##/##DB_NAME##/$currentDate ]
  11. then
  12.       echo $currentDate
  13.       mkdir ##NETWORK_SHARE##/##DB_NAME##/$currentDate
  14. fi
  15.  
  16. pg_dump -h ##SERVERNAME## -p ##SERVERPORT## -U ##BACKUP_USER## -j 8 -c --if-exists -Fd -b -v -f ##NETWORK_SHARE##/##DB_NAME##/$currentDate/$currentDateTime.backup ##DB_NAME## > ##OUTPUT_LOG## 2>&1

HighCharts: Basic Graphing

This entry is part 1 of 2 in the series React: Highcharts

HighCharts is a pretty cool graphing package. Below is an example of how you can create a universal React class for a HighCharts graph.

You will need to install the highcharts package. At the time of this writing I am using 5.0.6.
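Assuming you are using npm, the install looks something like this (pin whatever version you are targeting):

```shell
npm install highcharts --save
```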

You will also need to import HighCharts and require the charts.

  1. import Highcharts from "highcharts/highcharts.js";
  2. window.Highcharts = Highcharts;
  3. require("highcharts/highcharts-more.js")(Highcharts);

In the state I hold these values to manage how the chart loads and displays data.

  1. getInitialState: function() {
  2.       return {
  3.             chartSettings: null, //Holds the chart settings data
  4.             loaded: false, //Determines if the chart has been loaded
  5.             chart: null, //The chart
  6.             data: [], //The data to utilize for the chart. It's most likely in series format
  7.       };
  8. },

In the component methods check to see when the class has been loaded with data or reset if needed.

  1. componentDidUpdate: function() {
  2.       if (!this.state.loaded) { //The chart hasn't been loaded with data so load it and refresh the chart
  3.             this.setState({
  4.                   loaded: true,
  5.                   data: this.props.data
  6.             }, () => { this.chart(); });
  7.       }
  8. },
  9. componentWillReceiveProps: function(newprops) {
  10.       if (this.state.loaded && this.props != newprops) { //The chart has been loaded but the data has changed. Refresh the chart after
  11.             this.setState({
  12.                   data: newprops.data
  13.             }, () => { this.chart(); });
  14.       }
  15. },

In the class, the render method is how the chart attaches to the UI.

  1. render: function() {
  2.       return (<div id={this.props.id}></div>);
  3. },

You can create a “chart” method, which you can use to manage the display of the chart. The main section of it is how to display the chart after you have modified the chart settings. You could also use a prop to control whether to show the loading indicator. Totally up to you.

  1. this.setState({
  2.       loaded: true, //The data and chart have been loaded
  3.       chart: new Highcharts.Chart(chartSettings) //Set the chart
  4. }, () => {
  5.       if (this.props.data.length == 0) { //If no data is present yet then show a loading image
  6.             this.state.chart.showLoading();
  7.             this.state.chart.redraw();
  8.       } else { //The data has been loaded.
  9.             this.state.chart.hideLoading();
  10.             this.state.chart.redraw();
  11.       }
  12. });

In the “chart” method you should clean up your existing chart before generating a new one.

  1. if (this.state.chart !== null) {
  2.       this.state.chart.destroy();
  3.       this.state.chart = null;
  4. }

There are so many ways of controlling the chart settings; I will try to cover the majority of the options. The basic definition looks like this.

  1. chartSettings = $.extend(true, {},
  2.       this.props.chartSettings,
  3.       {
  4.             chart: {
  5.                   renderTo: this.props.id, //The id you passed into the class
  6.                   backgroundColor: "",
  7.                   type: this.props.chart_type, //By passing in the chart type it will be open to various types of charts.
  8.                   height: 500, //You can specify the height of the graph if you want.
  9.                   zoomType: "xy", //If you want to be able to zoom.
  10.             },
  11.             credits: {
  12.                   enabled: false //Turns off the powered by
  13.             },
  14.             title: {
  15.                   text: this.props.title,
  16.                   style: { color: "white" }
  17.             },
  18.             subtitle: {
  19.                   text: this.props.sub_title
  20.             },
  21.             tooltip: {
  22.             },
  23.             plotOptions: {
  24.             },
  25.             series: this.state.data //The arrow functions above keep "this" bound to the component
  26.       });

Tooltip has various options. One I like to use is the “formatter” function, which allows you to modify what is displayed on hover.

  1. tooltip: {
  2.       formatter: function(){
  3.             var pointIndex = this.point.index;
  4.             var seriesName = this.point.series.name;
  5.             return seriesName + ": " + this.point.y; //Whatever you return here is displayed on hover
  6.       }
  7. }

There is also the xAxis option, where you can do a variety of different things. How you create your graph determines which options you should use. The type in xAxis can have a few different values; I show “datetime” below, but you can also choose “linear” for numerical data, as well as “category”, which allows you to put string data on the X axis.

  1. xAxis: {
  2.       type: "datetime",
  3.       categories: this.props.categories,
  4.       title: {
  5.             enabled: true,
  6.       },
  7.       showLastLabel: true,
  8.       showFirstLabel: true,
  9.       tickInterval: 15, //I chose to increment the x value by 15 days. But you can choose whatever you want
  10.       labels: {
  11.             formatter: function () {
  12.                   if (this.axis.options.type === "datetime") { //this.axis.options holds the axis configuration
  13.                         return Highcharts.dateFormat("%m/%d", this.value); //You can format however you like
  14.                   } else {
  15.                         return this.value;
  16.                   }
  17.             }
  18.       }
  19. },

There is also the yAxis option, where you can likewise do a variety of different things depending on how you create your graph. Here is an example.

  1. yAxis: {
  2.       allowDecimals: true,
  3.       title: {
  4.             align: "high"
  5.       },
  6.       labels: {
  7.             overflow: "justify",
  8.             formatter: function() {
  9.                   return this.value;
  10.             }
  11.       },
  12. },

You can add onClick events to series points if you want.

  1. plotOptions: {
  2.       series: {
  3.             point: {
  4.                   events: {
  5.                         click: function(e){
  6.                         }
  7.                   }
  8.             }
  9.       }
  10. }

There are various graph types. For example “pie”, “bar”, “scatter”, etc. Here are a few different examples of a basic setup.

  1. plotOptions: {
  2.       pie: {
  3.             allowPointSelect: true, //When you click the pie slice it moves out slightly
  4.             cursor: "pointer",
  5.             shadow: false,
  6.             dataLabels: {
  7.                   enabled: true,
  8.                   formatter: function(){
  9.                   },
  10.                   color: "white",
  11.                   style: {
  12.                         textShadow: false
  13.                   }
  14.             }
  15.       },
  16.       bar: {
  17.             dataLabels: {
  18.                   enabled: true,
  19.                   allowOverlap: true, //Labels will overlap. Turn this off if you don't want your labels to overlap.
  20.             }
  21.       },
  22.       scatter: {
  23.             dataLabels: {
  24.                   crop: false, //Labels will not be hidden
  25.             },
  26.             marker: {
  27.                   radius: 3,
  28.                   states: {
  29.                         hover: {
  30.                               enabled: true
  31.                         }
  32.                   }
  33.             },
  34.             states: {
  35.                   hover: {
  36.                         marker: {
  37.                               enabled: false
  38.                         }
  39.                   }
  40.             }
  41.       }
  42. }

Highcharts: Add Custom Buttons

This entry is part 2 of 2 in the series React: Highcharts

If you’ve never used HighCharts for your graphing needs I highly suggest it. Very customizable and easy to use.

You will need to require the exporting requirements.

  1. import Highcharts from "highcharts/highcharts.js";
  2. window.Highcharts = Highcharts;
  3. require("highcharts/modules/exporting")(Highcharts);

If you would like to add a custom button to your graph you can use the exporting section like below.

  1. exporting: {
  2.       buttons: {
  3.             customButton: {
  4.                   text: "Button Text",
  5.                   onclick: function () {
  6.                   }
  7.             },
  8.       },
  9. }

React: Export Module

Sometimes we need exportable modules for use throughout our applications. It is pretty straightforward to export a module.

NPM Installs:

  1. npm install create-react-class --save
  2. npm install prop-types --save
  3. npm install react --save
  4. npm install react-dom --save

Export:

  1. module.exports = {exportedModuleName:ModuleName};

exportedModuleName is the name that you use in other pages.
ModuleName is the name of the module to export.

The module will look something like this.
This one is just a TD module, but really you can do anything you want.

  1. window.jQuery = window.$ = require("jquery");
  2. import React from "react";
  3. import ReactDOM from "react-dom";
  4. var createReactClass = require('create-react-class');
  5. var propTypes = require('prop-types');
  6.  
  7. var MyExportableModule = createReactClass({
  8.       render: function() {
  9.             return React.createElement("anyelement", {className: this.props.className, key: this.props.name}, this.props.fields);
  10.       }
  11. });
  12.  
  13. //Note: lowercase propTypes, using the prop-types package imported above
  14. MyExportableModule.propTypes = {
  15.       name: propTypes.string.isRequired,
  16.       fields: propTypes.array.isRequired,
  17.       className: propTypes.string
  18. };
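The export/require pattern itself is plain CommonJS and is easy to see without React; everything below is illustrative:

```javascript
// Stand-in for the module you would export (normally a React class)
var ModuleName = function () {
      return "rendered";
};

// Equivalent of: module.exports = {exportedModuleName: ModuleName};
var exported = { exportedModuleName: ModuleName };

// In another page you would write:
//   var MyModule = require("./mymodule").exportedModuleName;
var MyModule = exported.exportedModuleName;
console.log(MyModule());
```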

 

 

 

Distinct Records in Object Array

Sometimes you need to determine the distinct objects or values in an array. There are many ways to do this; the one below, which I have used at times, can be a bit slow depending on the size of your array.
From my investigation there is a lodash version that is much better. Once I do some testing I will update this, but for now here is an example.
I expanded on the idea from Stack Exchange.

  1. var distinct = function(objectArray, param){
  2.       var distinctResult = [];
  3.  
  4.       $.each(objectArray, function(i, currentObject){
  5.             if (param !== null) {
  6.                   if (distinctResult.filter(function(v) { return v[param] == currentObject[param]; }).length == 0)
  7.                   {
  8.                         distinctResult.push(currentObject);
  9.                   }
  10.             } else {
  11.                   if (!exists(distinctResult, currentObject))
  12.                   {
  13.                         distinctResult.push(currentObject);
  14.                   }
  15.             }
  16.       });
  17.  
  18.       return distinctResult;
  19. };
  20.  
  21. var exists = function(arr, object){
  22.       var compareToJson = JSON.stringify(object);
  23.       var result = false;
  24.       $.each(arr, function(i, existingObject){
  25.             if(JSON.stringify(existingObject) === compareToJson) {
  26.                   result = true;
  27.                   return false; // break out of $.each
  28.             }
  29.       });
  30.  
  31.       return result;
  32. };
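For distinct primitive values (rather than whole objects), modern JavaScript can skip the helper entirely by using a Set; a quick sketch:

```javascript
var values = [1, 2, 2, 3, 1, 3];

// A Set keeps only the first occurrence of each value, in insertion order
var distinctValues = Array.from(new Set(values));

console.log(distinctValues);
```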

C#: Connect to WebService

If you need to call a webservice method from your C# application, you can do it as in the example below. Notice I also use Newtonsoft for handling the json that is returned, but you can handle it however you want depending on your needs. This is just one way of doing it.

  1. using Newtonsoft.Json;
  2. using Newtonsoft.Json.Linq;
  3. using System.Net.Http;
  4. using System.Text;
  5.  
  6. string webserviceURL = "http://whatevermyurlis.com/";
  7. string data = String.Format("my_id={0}", myId);
  8. string url = String.Format("{0}MyMethodName?{1}", webserviceURL, data);
  9. System.Net.Http.HttpClient client = new System.Net.Http.HttpClient();
  10. client.BaseAddress = new System.Uri(url);
  11. client.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
  12.  
  13. System.Net.Http.HttpContent content = new StringContent("", UTF8Encoding.UTF8, "application/json");
  14. HttpResponseMessage message = client.PostAsync(url, content).Result;
  15. if (message.IsSuccessStatusCode)
  16. {
  17.       string result = message.Content.ReadAsStringAsync().Result;
  18.       JObject jObject = JObject.Parse(result);
  19. }

Java: ExecutorService / Future

If you want to spin off a bunch of threads and manage them and their responses effectively, you can do it this way. TYPE below is a placeholder for whatever type your callable returns.

  1. final ExecutorService executor = Executors.newFixedThreadPool(numThreads);
  2. final Collection<Future<TYPE>> futures = new LinkedList<Future<TYPE>>();
  3.  
  4. //Write a loop if you want
  5. final Callable<TYPE> callable = new MyClass();
  6. futures.add(executor.submit(callable));
  7. executor.shutdown();
  8.  
  9. // We need to monitor the queries for their data being returned.
  10. for (final Future<TYPE> future : futures) {
  11.       try {
  12.             final TYPE val = future.get();
  13.       } catch (final InterruptedException e) {
  14.       } catch (final ExecutionException e) {
  15.       }
  16. }

 

The callable works with a class so you will need that.

  1. package mypackage;
  2.  
  3. import java.util.concurrent.Callable;
  4. import org.apache.log4j.Logger;
  5.  
  6. public class MyClass implements Callable<TYPE> {
  7.  
  8.       static final Logger logger = Logger.getLogger(MyClass.class);
  9.  
  10.       MyClass() {
  11.       }
  12.  
  13.       /**
  14.        * This is how each caller queries for the data. It can be called many times and runs on threads from the calling class,
  15.        * so data is returned as it gets it.
  16.        */
  17.       @Override
  18.       public TYPE call() throws Exception {
  19.             try {
  20.                   return null;
  21.             } catch (final Exception e) {
  22.                   return null;
  23.             }
  24.       }
  25. }

React: Page Layout

There are many aspects to React; below is just a sample of how you could set up a ReactJS page. Look up what each section does.

Go here to review the React Life Cycle. It is important to review and understand it so that you don’t make mistakes during your development.

NPM Installs:

  1. npm install create-react-class --save
  2. npm install react --save
  3. npm install react-dom --save

Class Setup:

  1. window.jQuery = window.$ = require("jquery");
  2. import React from "react";
  3. import ReactDOM from "react-dom";
  4. import "../css/newpage.css";
  5. var createReactClass = require('create-react-class');
  6.  
  7. var NewPage = createReactClass ({
  8.       getData: function() {
  9.             var params = {};
  10.             
  11.             $.ajax({
  12.                   url: "/my_web/service_method/",
  13.                   dataType: "json",
  14.                   data: params,
  15.                   success: function(data) {
  16.                         this.setState({
  17.                               "data": data
  18.                         }, () => {
  19.                               //If you want to do something after you get the data loaded
  20.                         });
  21.                   }.bind(this),
  22.                   error: function(xhr, status, err) {
  23.                         console.error("Bad");
  24.                   }.bind(this)
  25.             });
  26.       },
  27.       getInitialState: function() {
  28.             return{
  29.                   "data": [],
  30.             };
  31.       },
  32.       componentDidMount: function() {
  33.       },
  34.       componentWillMount: function() {
  35.             this.getData();
  36.       },
  37.       render: function() {
  38.  
  39.             return (
  40.                   <div key="div">
  41.                   </div>
  42.             );
  43.       }
  44. });
  45.  
  46. ReactDOM.render(<NewPage/>, document.getElementById("app-container"));

Postgres: Functions/Triggers

I usually use plpgsql to write my functions.

Example Update Function:
This function could be used for the save/creation of records. It passes back a boolean and an id.

  1. CREATE OR REPLACE FUNCTION Public.fn_my_function(param1 integer, param2 integer, OUT Success Boolean, OUT ID integer)
  2. AS $body$
  3. DECLARE
  4.       any_variable integer;
  5. BEGIN
  6.       $3 = True;
  7.       $4 = 3234;
  8.  
  9. EXCEPTION
  10.       WHEN OTHERS THEN
  11.             $3 = False;
  12.             $4 = -1;
  13.             RAISE WARNING '[Public.fn_my_function] - UDF ERROR [OTHER] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
  14. END;
  15. $body$ LANGUAGE plpgsql
  16. SECURITY INVOKER
  17. SET search_path = Public;

 

Get Function:
You can also return a table.
IE: RETURNS TABLE(id integer, data json) AS $body$

  1. CREATE OR REPLACE FUNCTION Public.fn_get_function()
  2. RETURNS SETOF Public.My_Table AS $body$
  3. BEGIN
  4.       RETURN QUERY
  5.       SELECT *
  6.       FROM public.my_table t
  7.       ORDER BY t.Name;
  8. END
  9. $body$ LANGUAGE plpgsql
  10. SECURITY INVOKER
  11. SET search_path = Public;

 

Comment Function:

  1. COMMENT ON FUNCTION Public.fn_my_function(integer, integer) IS '';

 

Drop Function:
Notice how I use “IF EXISTS”. This is best because, without it, your script would fail if the function didn’t exist.

  1. DROP FUNCTION IF EXISTS Public.fn_my_function(Integer, Integer, Integer);

 

Trigger:

  1. CREATE OR REPLACE FUNCTION Public.fn_my_function() RETURNS TRIGGER AS $BODY$
  2. DECLARE
  3. v_old_data json;
  4. v_new_data json;
  5. BEGIN
  6. IF (TG_OP = 'UPDATE') THEN
  7.       v_old_data := row_to_json(OLD.*);
  8.       v_new_data := row_to_json(NEW.*);
  9.       RETURN NEW;
  10. ELSIF (TG_OP = 'DELETE') THEN
  11.       v_old_data := row_to_json(OLD.*);
  12.       RETURN OLD;
  13. ELSIF (TG_OP = 'INSERT') THEN
  14.       v_new_data := row_to_json(NEW.*);
  15.       RETURN NEW;
  16. ELSE
  17.       RAISE WARNING '[Public.fn_my_function] - Other action occurred: %, at %',TG_OP,now();
  18.       RETURN NULL;
  19. END IF;
  20.       
  21. EXCEPTION
  22.       WHEN data_exception THEN
  23.             RAISE WARNING '[Public.fn_my_function] - UDF ERROR [DATA EXCEPTION] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
  24.             RETURN NULL;
  25.       WHEN unique_violation THEN
  26.             RAISE WARNING '[Public.fn_my_function] - UDF ERROR [UNIQUE] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
  27.             RETURN NULL;
  28.       WHEN OTHERS THEN
  29.             RAISE WARNING '[Public.fn_my_function] - UDF ERROR [OTHER] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
  30.             RETURN NULL;
  31. END;
  32. $BODY$
  33. LANGUAGE plpgsql
  34. SECURITY INVOKER
  35. SET search_path = Public;

Postgres: Check Data Type

When you deal with the json data type there is the option of using json_typeof(value). However, this doesn’t always give you what you expect; for example a date value renders as text. To get around this I use the following two functions. Feel free to use them or suggest alternatives.

Test Value is Date:

  1. create or replace function is_date(s varchar) returns boolean as $$
  2. begin
  3.       perform s::date;
  4.       return true;
  5. exception when others then
  6.       return false;
  7. end;
  8. $$ language plpgsql;

 

Test Json Value for Type:

  1. CREATE OR REPLACE FUNCTION get_json_type(s json) returns text as $$
  2. DECLARE val_type text;
  3. BEGIN
  4.       IF (SELECT public.is_date(s::text)) THEN
  5.             val_type := 'date';
  6.       ELSEIF (json_typeof(s) = 'number') THEN
  7.             val_type := 'double precision';
  8.       ELSEIF (json_typeof(s) = 'string') THEN
  9.             val_type := 'text';
  10.       ELSEIF (json_typeof(s) = 'boolean') THEN
  11.             val_type := 'boolean::integer';
  12.       ELSE
  13.             val_type := json_typeof(s);
  14.       END IF;
  15.  
  16.       return val_type;
  17. END;
  18. $$ language plpgsql;

 

Postgres: Misc

Here are some misc things you can do in postgres 9.4.

Print to Console:

  1. raise notice 'VALUE: %', value;

Json Build Object:

  1. json_build_object('name', value, 'name2', value2)

Json Array:

  1. json_agg(data)

Extract Month:
You can also extract year, day, etc.

  1. extract(month from date_column)

Json Querying:
You can extract a field from a json column. ->> returns the value as text, -> returns it as json, and #> walks a path:
  1. rec->>'id'
  2. rec#>'{field, sub_field,value}'
  3. rec->'object'

Row To Json:

  1. SELECT row_to_json(t)
  2. FROM mytable t

Update With FROM:

  1. UPDATE mytable l
  2. SET old_val_id=sub.new_val_id
  3. FROM (
  4. SELECT l2.id, s.id as new_val_id
  5. FROM mytable l2
  6. INNER JOIN mysupertable s ON s.id=l2.old_val_id
  7. ) sub
  8. WHERE sub.id=l.id;

Inline Script:

  1. DO $do$
  2. DECLARE id integer;
  3. BEGIN      
  4.  
  5. END $do$ LANGUAGE plpgsql;

If:

  1. IF CONDITION THEN
  2. END IF;

Let’s say you have a json column containing a field that tells you which key to select for certain data; it could be customizable by the user entering the data. To dynamically select that key you can do the below.

  1. jsonField->((to_json((jsonField->'key'->'sub_key'))#>>'{}')::text)

Upsert:
Sometimes we want to perform update if the record exists or an insert if it doesn’t. Most of the time we write a function that deals with this on a record by record basis. But what if we want to do a batch update or insert. What do we do then. We perform an upsert. See below. We write the update and return the result in the remainder of the query  we check if upsert gave us anything. If it didn’t we perform the insert.

  1. WITH upsert AS (UPDATE MyTable mt
  2. SET column_name = sub.new_value
  3. FROM (SELECT mot.id, mot.new_value
  4. FROM MyOtherTable mot
  5. ) sub
  6. WHERE sub.id=mt.related_id
  7. RETURNING *)
  8. INSERT INTO MyTable (id, column_name, related_id)
  9. SELECT ?, ?, ?
  10. WHERE NOT EXISTS (SELECT * FROM upsert)

Regex Substring:
There are different ways of using regex. This is one way.

  1. substring(column_name from '([0-9]{4}-[0-9]{2}-[0-9]{2})')

PGSQL Loop:
This is one way to loop using pgsql.

  1. DO $do$
  2. DECLARE rec RECORD;
  3. BEGIN
  4.       FOR rec IN SELECT * FROM MyTable
  5.       LOOP
  6.             --We can then use rec like rec.column_name
  7.       END LOOP;
  8. END $do$ LANGUAGE plpgsql;

Epoch to timestamp:

to_timestamp takes seconds since the epoch (divide by 1000 first if you have milliseconds). This will return 2017-08-17 21:26:05 in UTC.

  1. select to_timestamp(1503005165);

Postgres: Tables

Below are some common functions for doing table creation and maintenance.

Table Creation:

  1. CREATE TABLE Public.mytable (
  2.       id BigSerial PRIMARY KEY,
  3.       text_column varchar NOT NULL,
  4.       int_column Integer NOT NULL,
  5.       date_column timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
  6. );

Create Schema:

  1. CREATE SCHEMA IF NOT EXISTS test;

Create Schema with Authorization:

  1. CREATE SCHEMA IF NOT EXISTS test AUTHORIZATION myUser;

Drop Schema Cascade:

  1. DROP SCHEMA IF EXISTS test CASCADE;

Comment On Table:

  1. COMMENT ON TABLE Public.mytable IS 'A List of data.';

Vacuum:
Vacuum has several options; it is best to review them.

  1. vacuum (analyze,verbose);

Drop Constraint:

  1. ALTER TABLE mytable DROP CONSTRAINT mytable_id_pkey;

Add Constraint:

  1. ALTER TABLE public.mytable ADD CONSTRAINT mytable_id_pkey PRIMARY KEY (id);

Rename Constraint:

  1. ALTER TABLE mytable RENAME CONSTRAINT "mytable_id2_fkey" TO "mytable_id3__fkey";

Rename Table Column:

  1. ALTER TABLE mytable RENAME COLUMN text_column TO text_column2;

Rename Table:

  1. ALTER TABLE mytable RENAME TO mytable2;

Drop Table:

  1. DROP TABLE public.mytable;

Add Column to Table:

  1. ALTER TABLE Public.mytable ADD column_name boolean NOT NULL DEFAULT False;

Alter Column Data Type Json:

  1. ALTER TABLE public.mytable ALTER COLUMN json_col TYPE json USING (json_col::json);

Rename Sequence:

  1. ALTER SEQUENCE mytable_id_seq RENAME TO mytable2_id_seq;

Sequence Table Owner:

  1. alter sequence mytable_id_seq owned by mytable.id;

Sequence Next Value:

  1. alter table mytable alter column mytable_id set default nextval('mytable_id_seq');

Add Foreign Key:

  1. alter table mytable ADD FOREIGN KEY (foreign_id) REFERENCES public.mytable2(foreign_id);

Create Index Json:

  1. CREATE INDEX mytable_idx ON Public.mytable((Data->'Key'->'Key'->>'value'));

Create Index:

  1. CREATE INDEX mytable_idx ON public.mytable(id);

Drop Index:

  1. DROP INDEX public.mytable_idx;

Re-Cluster Table:

  1. Cluster mytable using mytable_pkey;

Trigger:

  1. CREATE TRIGGER "tg_mytrigger" BEFORE UPDATE OF my_column OR INSERT ON public.mytable FOR EACH ROW EXECUTE PROCEDURE public.mytablestrigger();