[DEPRECATED] CentOS 6: Install MongoDB Java REST server

Versions

  • CentOS 6.4
  • Oracle Java 1.6 JDK
  • MongoDB 2.4

Prerequisites

Install

1. Download, extract the tar, and mv it to /usr/lib
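Something along these lines, with a placeholder URL and archive name (substitute the actual release):

    cd /tmp
    curl -LO http://example.com/downloads/mongoser.tar.gz   # hypothetical URL/name
    tar -xzf mongoser.tar.gz
    sudo mv mongoser /usr/lib/mongoser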

Install as a service

1. Edit /etc/init.d/mongoser
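A minimal sketch of such a script, assuming the server was unpacked to /usr/lib/mongoser in the install step and ships a jar named mongoser.jar (the jar name and JDK path are assumptions):

    #!/bin/bash
    # mongoser      Start/stop the MongoDB Java REST server
    # chkconfig: 2345 85 15
    # description: MongoDB Java REST server

    # JAVA_HOME is hard-coded here; alternatively, set it in
    # /etc/sysconfig/mongoser (the normal /etc/profile is not run for services)
    export JAVA_HOME=/usr/java/default              # assumed JDK location
    [ -f /etc/sysconfig/mongoser ] && . /etc/sysconfig/mongoser

    MONGOSER_HOME=/usr/lib/mongoser                 # install location from step 1
    MONGOSER_JAR=$MONGOSER_HOME/mongoser.jar        # hypothetical jar name
    PIDFILE=/var/run/mongoser.pid

    case "$1" in
      start)
        echo "Starting mongoser"
        nohup "$JAVA_HOME/bin/java" -jar "$MONGOSER_JAR" > /var/log/mongoser.log 2>&1 &
        echo $! > "$PIDFILE"
        ;;
      stop)
        echo "Stopping mongoser"
        [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
        ;;
      *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
    esac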

Note: the script hard-codes JAVA_HOME, but this could be deferred to the /etc/sysconfig/mongoser script (which you must create) if desired. Note that the normal /etc/profile is *not* run for services.
2. Make it executable
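With chmod:

    chmod +x /etc/init.d/mongoser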

3. Start service
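Via the service command:

    service mongoser start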

4. Start service on boot
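With chkconfig (the init script above carries the chkconfig header that --add reads):

    chkconfig --add mongoser
    chkconfig mongoser on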

Test

1. Get a list of databases
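For example (the port and path here are placeholders; use whatever the REST server actually listens on):

    curl http://localhost:8080/databases    # hypothetical port and path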

Output:

Source

CentOS 6: Open a Port for iptables

Versions

  1. CentOS 6.5

Configure

  1. Edit /etc/sysconfig/iptables and add the following before COMMIT
    Typical /etc/sysconfig/iptables:
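    # Roughly the stock CentOS 6 default:
    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    -A INPUT -p icmp -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
    -A INPUT -j REJECT --reject-with icmp-host-prohibited
    -A FORWARD -j REJECT --reject-with icmp-host-prohibited
    COMMIT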

  2. If opening a TCP port, add the following line above the first REJECT statement:
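    # replace $PORT with the port to open
    -A INPUT -m state --state NEW -m tcp -p tcp --dport $PORT -j ACCEPT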

    Note
    if neither tcp nor udp was specified, assume tcp
  3. If UDP, instead add the following line above the first REJECT statement:
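    -A INPUT -m state --state NEW -m udp -p udp --dport $PORT -j ACCEPT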

    Example: Open port 666 for tcp
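    -A INPUT -m state --state NEW -m tcp -p tcp --dport 666 -j ACCEPT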

  4. Restart iptables
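    service iptables restart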

Test from remote machine

Note
replace $HOSTNAME and $PORT below
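For example, with nc (telnet works too):

    nc -zv $HOSTNAME $PORT    # reports "succeeded" if the port is open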

CentOS 6: Install MongoDB

Versions

  1. CentOS 6.5
  2. MongoDB 2.4.10

Install

  1. Edit /etc/yum.repos.d/10gen.repo
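The 10gen repo definition from MongoDB's install docs of that era (64-bit):

    [mongodb]
    name=MongoDB Repository
    baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
    gpgcheck=0
    enabled=1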

  2. Install
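The 2.4-era packages are mongo-10gen (shell and tools) and mongo-10gen-server (mongod):

    yum install -y mongo-10gen mongo-10gen-server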

Configure

  1. Start service
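The package installs an init script named mongod:

    service mongod start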

  2. Start on boot
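    chkconfig mongod on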

Test
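A quick smoke test from the mongo shell:

    mongo --eval 'db.version()'    # should print 2.4.10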

Optional: Open firewall port

  1. Open port 27017 for MongoDB. See CentOS 6: Open a Port for iptables.

[DEPRECATED] CentOS 6: Install Hadoop from Apache Bigtop

WARNING

This guide is a work-in-progress and does not currently result in a fully working Hadoop. Please see CentOS 6: Install Single-node Hadoop from Cloudera CDH instead.

Overview

Guide for setting up a single-node Hadoop on CentOS using the Apache Bigtop repo.

Versions

  • CentOS 6.3
  • Oracle Java JDK 1.6
  • Apache BigTop 0.5.0
  • Hadoop 2.0.2-alpha

Prerequisites

Install

1. Download the yum repo file:
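Bigtop publishes the repo file under the Apache dist area; the exact URL below is an assumption (old releases move to archive.apache.org over time):

    wget -O /etc/yum.repos.d/bigtop.repo \
      http://archive.apache.org/dist/bigtop/bigtop-0.5.0/repos/centos6/bigtop.repo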

2. Install
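The Bigtop instructions pull in everything matching hadoop*:

    yum install -y hadoop\*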

Configure

Separate where the namenode and datanode store their files

1. Edit /etc/hadoop/conf/hdfs-site.xml and change the following properties to the listing below:

  • dfs.namenode.name.dir
  • dfs.namenode.checkpoint.dir
  • dfs.datanode.data.dir

Note: this step is not part of the official Apache BigTop instructions, but was required to avoid errors when running a datanode on the same machine as the namenode.
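A sketch of the listing; the /var/lib/hadoop-hdfs subdirectories are assumptions (the datanode path matches the one mentioned in step 2's note):

    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/var/lib/hadoop-hdfs/namenode</value>
    </property>
    <property>
      <name>dfs.namenode.checkpoint.dir</name>
      <value>/var/lib/hadoop-hdfs/checkpoint</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/var/lib/hadoop-hdfs/datanode</value>
    </property>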

2. Format the name node
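With Bigtop's packaging the namenode runs as the hdfs user, so format as that user:

    sudo -u hdfs hdfs namenode -format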

Output:

Note: formatting the datanode is not required, *however* if you have a previous install you may have to remove /var/lib/hadoop-hdfs/datanode to clear locks

3. Start hadoop namenode and datanode
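Via the service command:

    service hadoop-hdfs-namenode start
    service hadoop-hdfs-datanode start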

TODO: figure out why hadoop-hdfs-zkfc doesn’t start
4. Start services on boot
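    chkconfig hadoop-hdfs-namenode on
    chkconfig hadoop-hdfs-datanode on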

5. Optional: Create a home directory on the hdfs
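Assuming $USER is the account that will run jobs:

    sudo -u hdfs hdfs dfs -mkdir /user/$USER
    sudo -u hdfs hdfs dfs -chown $USER /user/$USER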

6. Edit /etc/profile.d/hadoop.sh
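Assumed contents; Bigtop installs Hadoop under /usr/lib/hadoop, and the config lives in /etc/hadoop/conf:

    export HADOOP_HOME=/usr/lib/hadoop        # assumed Bigtop install location
    export HADOOP_CONF_DIR=/etc/hadoop/conf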

7. Load into session
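By sourcing it:

    . /etc/profile.d/hadoop.sh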

Test

1. Download the examples (for some reason they are missing from 2.0.2-alpha)
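One place to get the jar is Maven Central (the URL is an assumption):

    curl -LO http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-examples/2.0.2-alpha/hadoop-mapreduce-examples-2.0.2-alpha.jar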

2. Get a directory listing from hadoop hdfs
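For example, list the root of the HDFS:

    hdfs dfs -ls /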

3. Run one of the examples
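For example, the pi estimator:

    hadoop jar hadoop-mapreduce-examples-2.0.2-alpha.jar pi 2 10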

TODO: while the cluster appears to be working, this example hangs. :[

4. Navigate browser to http://<hostname>:50070
[Screenshot: Hadoop NameNode web UI]
5. Click on “Live Nodes”
[Screenshot: Hadoop NameNode web UI, Live Nodes page]

Sources

[DEPRECATED] CentOS 6: Install wso2server.sh as a service that starts on boot

Versions

  • CentOS 6.3
  • Oracle JDK 1.6

Configure

Note: Replace <product> with the abbreviated name of the WSO2 product (API Manager = 'am', Data Services Server = 'dss', etc.) and replace <version> with the version number ('1.3.1', '4.0.6', etc.)

1. Edit /etc/init.d/wso2<product>
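A minimal sketch, assuming the product is unpacked under /usr/lib/wso2<product>-<version> (the install location and JDK path are assumptions; wso2server.sh itself accepts start/stop/restart):

    #!/bin/bash
    # wso2<product>  Start/stop the WSO2 <product> server
    # chkconfig: 2345 80 20
    # description: WSO2 <product> server

    # JAVA_HOME is hard-coded here; alternatively, set it in
    # /etc/sysconfig/wso2<product> (the normal /etc/profile is not run for services)
    export JAVA_HOME=/usr/java/default                  # assumed JDK location
    [ -f /etc/sysconfig/wso2<product> ] && . /etc/sysconfig/wso2<product>

    WSO2_HOME=/usr/lib/wso2<product>-<version>          # assumed install location

    case "$1" in
      start|stop|restart)
        "$WSO2_HOME/bin/wso2server.sh" "$1"
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
    esac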

Note: the script hard-codes JAVA_HOME, but this can be deferred to the /etc/sysconfig/wso2<product> script (which you must create) if desired. Note that the normal /etc/profile is *not* run for services.
TODO: find better way of setting JAVA_HOME, can’t execute /etc/profile or /etc/profile.d/java.sh directly

2. Make it executable
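With chmod:

    chmod +x /etc/init.d/wso2<product>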

3. Start service on boot
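With chkconfig (--add reads the header comments in the script above):

    chkconfig --add wso2<product>
    chkconfig wso2<product> on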

Verify

1. Start service
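    service wso2<product> start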

Source

[DEPRECATED] WSO2: Use LDAP as the Carbon user-store for any WSO2 product

Tested Products

  • WSO2 Data Services Server 3.0.1
  • WSO2 API Manager 1.3.1

Overview

All WSO2 Carbon-based products can be configured to work with LDAP simply by changing configuration files. Out of the box, Carbon uses one H2 database as a user-store for usernames, passwords, etc., and another H2 database to store roles and permissions. This guide replaces the first database with LDAP. This configuration has been tested with both the WSO2 API Manager and WSO2 Data Services Server.

1. Import LDAP server PEM file into Java trust store

  • Default Carbon trust store: <carbon-home>/repository/resources/security/client-truststore.jks
  • Default Carbon trust store password: wso2carbon
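For example, with the JDK's keytool (ldap-server.pem is a placeholder for the exported certificate):

    keytool -import -trustcacerts -alias ldap \
      -file ldap-server.pem \
      -keystore <carbon-home>/repository/resources/security/client-truststore.jks \
      -storepass wso2carbon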

2. Edit <carbon-home>/repository/conf/user-mgt.xml
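A partial sketch of the LDAP user-store configuration (the full file also contains the AdminUser block the notes below refer to); the class name and all LDAP values here are illustrative and vary by Carbon version and directory layout:

    <UserStoreManager class="org.wso2.carbon.user.core.ldap.LDAPUserStoreManager">
        <Property name="ConnectionURL">ldaps://ldap.example.com:636</Property>
        <Property name="ConnectionName">uid=binduser,ou=people,dc=example,dc=com</Property>
        <Property name="ConnectionPassword">changeme</Property>
        <Property name="UserSearchBase">ou=people,dc=example,dc=com</Property>
        <Property name="ReadLDAPGroups">false</Property>
    </UserStoreManager>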

Note1: The password field /UserManager/Realm/Configuration/AdminRole/AdminUser/Password has no effect since the user-store is external and pre-configured.
Note2: The admin user specified at /UserManager/Realm/Configuration/AdminRole/AdminUser must be the first account to log in. Other users will not be able to log in until they are assigned a WSO2 role that has authentication privileges.
Note3: The connection name must exist in the UserSearchBase.
Note4: The user specified by /UserManager/Realm/Configuration/UserStoreManager/Property[@name="ConnectionName"] does not need to be the LDAP admin. However, it must have sufficient privileges to search all accounts that need to be authenticated.
Note5: /UserManager/Realm/Configuration/UserStoreManager/Property[@name="ReadLDAPGroups"] determines if Carbon will retain its own roles or use the LDAP server’s groups.
3. Change default admin account for Carbon applications
Note1: the file will vary depending on the WSO2 product being configured
Note2: not all WSO2 products require this (DSS does not)

API Manager

Set the username at the following XPaths:

  • /APIManager/AuthManager/Username/text()
  • /APIManager/APIGateway/Username/text()
  • /APIManager/APIKeyManager/Username/text()

Set the password at the following XPaths:

  • /APIManager/AuthManager/Password/text()
  • /APIManager/APIGateway/Password/text()
  • /APIManager/APIKeyManager/Password/text()

Sources

CentOS 6: Clear the yum cache

Versions

  1. CentOS 6.3

Overview

Ran into problems tonight, after installing different versions of Hadoop, where yum would try to download the incorrect version. Fixing it required cleaning yum's caches AND manually deleting the repo's cache under /var/cache/yum.

Guide

  1. Remove the repo
    Note
    replace $REPONAME below with the name of the repo to clear
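Assuming the repo file is named after the repo:

    rm /etc/yum.repos.d/$REPONAME.repo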

  2. Clean all
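    yum clean all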

  3. Delete the yum cache for the repo
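    # layout on CentOS 6 is /var/cache/yum/$basearch/$releasever/$repoid
    rm -rf /var/cache/yum/*/*/$REPONAME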

Linux: Search and replace in files

Overview

Just a quick recipe for performing search & replace on many files at once.

Recipe

Note
  • Replace $NAME with a find match specifier, e.g. "*.php"
  • Replace $SEARCH_LITERAL and $REPLACE_LITERAL with unquoted search/replace strings
    • Search/replace strings must escape forward slashes (/) with backslashes (\), e.g. http:\/\/asdf.com
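A one-liner with find and sed:

    find . -name "$NAME" -type f -exec sed -i "s/$SEARCH_LITERAL/$REPLACE_LITERAL/g" {} \;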

Example: replace all occurrences of foo with bar in all PHP files
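    find . -name "*.php" -type f -exec sed -i "s/foo/bar/g" {} \;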

Linux: Test a SOAP web service using curl

curl is a Linux command-line HTTP tool.

Sample SOAP Message:
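(The operation and namespace below are placeholders; substitute your service's.)

    <?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Header/>
      <soapenv:Body>
        <!-- hypothetical operation and namespace -->
        <ns:echo xmlns:ns="http://example.com/ns">
          <ns:message>hello</ns:message>
        </ns:echo>
      </soapenv:Body>
    </soapenv:Envelope>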

Sample curl command to transmit the SOAP message to a SOAP service (with --data, curl will automatically POST):

Note
replace $HOSTNAME, $PORT, and $SOMEPATH below
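Assuming the message above is saved as soap-message.xml:

    curl --header "Content-Type: text/xml;charset=UTF-8" \
         --header "SOAPAction: \"\"" \
         --data @soap-message.xml \
         http://$HOSTNAME:$PORT/$SOMEPATH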

SBT: Standard build.sbt for Scala


Version

  • Scala 2.11.0

A basic build.sbt template:

Note

replace XXX below
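A sketch; the scalatest dependency is just an illustrative library:

    name := "XXX"

    organization := "XXX"

    version := "0.1.0-SNAPSHOT"

    scalaVersion := "2.11.0"

    libraryDependencies ++= Seq(
      "org.scalatest" %% "scalatest" % "2.1.5" % "test"
    )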

Example:
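    // hypothetical project coordinates
    name := "my-app"

    organization := "com.example"

    version := "0.1.0-SNAPSHOT"

    scalaVersion := "2.11.0"

    libraryDependencies ++= Seq(
      "org.scalatest" %% "scalatest" % "2.1.5" % "test"
    )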