Category Archives: centos6

CentOS 6: Install Samba

Versions

  1. CentOS 6.4
  2. Samba 3.6.9
  3. Windows 7

Install
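
A minimal sketch, assuming the samba server and client packages come from the stock CentOS repositories:

    yum install samba samba-client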

Configure

  1. Add a linux user for samba access (steps 1-11 are sketched together after this list):
    Note
    Substitute $SAMBA_USER with the samba username

  2. Set linux user password:

  3. Add linux user to samba (use same password)

  4. Optional: Add a path to share and change ownership to samba user:
    Note
    Substitute $PATH with the absolute path to the directory to share

    Note
    home directories are shared by the default samba config
  5. Optional: Set selinux context for directory to “samba_share_t” to permit sharing:

  6. Optional: Set selinux bool to make home directories shareable:

  7. Edit /etc/samba/smb.conf and change the workgroup to match the Windows 7 workgroup:
    Note
    Substitute $WG with the name of the workgroup

    Note
    If not configured, Windows 7 defaults to the workgroup name “WORKGROUP”
  8. Add the following to the end of /etc/samba/smb.conf:
    Note
    Substitute $SHARE_NAME with the name of the share

  9. Start services

  10. Start on boot

  11. Open the following ports for samba:
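
A minimal sketch of steps 1-11, assuming $SAMBA_USER, $PATH, $WG and $SHARE_NAME are substituted by hand as described in the notes (they are placeholders, not shell variables), and that step 11 means the standard Samba ports:

    # 1-3. create the linux user, set its password, register it with samba
    useradd $SAMBA_USER
    passwd $SAMBA_USER
    smbpasswd -a $SAMBA_USER            # use the same password

    # 4. optional: create the share path and give it to the samba user
    mkdir -p $PATH
    chown $SAMBA_USER:$SAMBA_USER $PATH

    # 5. optional: label the directory so selinux permits sharing
    chcon -t samba_share_t $PATH

    # 6. optional: allow home directories to be shared
    setsebool -P samba_enable_home_dirs on

    # 7-8. in /etc/samba/smb.conf set "workgroup = $WG" and append a share
    # stanza; one plausible form:
    #   [$SHARE_NAME]
    #   path = $PATH
    #   valid users = $SAMBA_USER
    #   writable = yes

    # 9-10. start the services and enable them on boot
    service smb start
    service nmb start
    chkconfig smb on
    chkconfig nmb on

    # 11. the standard samba ports: 137/udp, 138/udp, 139/tcp, 445/tcp
    # (see "CentOS 6: Open a Port for iptables" later in this archive)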

Test

  1. On Windows 7, open command-line and map drive:
    Note
    Substitute $HOSTNAME with the hostname of the samba machine and $PASSWORD with the password of the samba user

    Note

    It may be necessary to remove active connections and clear cached connection information if you have attempted to connect to the CentOS machine before; the second command in the sketch below does this.
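
A sketch of the Windows-side commands, where Z: is any free drive letter:

    rem map the share as drive Z:
    net use Z: \\$HOSTNAME\$SHARE_NAME $PASSWORD /user:$SAMBA_USER

    rem remove active connections and clear cached connection information
    net use * /delete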

CentOS 6.5: VirtualBox Guest Additions


Versions

  1. CentOS 6.5
  2. VirtualBox 4.3.12

Prerequisites

  1. Install CentOS as a VirtualBox guest

Install Prereqs
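
A plausible set of packages for building the Guest Additions kernel modules:

    yum install gcc make perl bzip2 kernel-devel kernel-headers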


Configure

Get the kernel version:

Note

VBoxLinuxAdditions.sh looks for the kernel source at $KERN_DIR/$(uname -r). The cp below makes the directory structure match its expectations (the default kernel sources folder includes the minor version).
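
A sketch, where <devel-version> is a placeholder for whatever kernel-devel directory yum actually installed:

    uname -r
    # e.g. 2.6.32-431.el6.x86_64, while the devel tree might be
    # /usr/src/kernels/2.6.32-431.17.1.el6.x86_64

    export KERN_DIR=/usr/src/kernels
    # copy the devel tree so it matches $KERN_DIR/$(uname -r)
    cp -r /usr/src/kernels/<devel-version> /usr/src/kernels/$(uname -r)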


Install
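
With the Guest Additions CD inserted (Devices > Insert Guest Additions CD image in the VirtualBox window), a minimal sketch:

    mount /dev/cdrom /mnt
    sh /mnt/VBoxLinuxAdditions.run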


Verify
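
One quick check is that the guest kernel modules loaded:

    lsmod | grep -i vbox
    # expect vboxguest, vboxsf and vboxvideo (names vary slightly by version)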


Cleanup


Reboot

[DEPRECATED] CentOS 6: Install DataStax Cassandra OpsCenter Community Edition

Versions

  • CentOS 6.4
  • Oracle Java JDK 1.6
  • OpsCenter Community 3.0.2

Prerequisites

Install

1. Edit /etc/yum.repos.d/datastax.repo

2. Install Opscenter Free
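
A sketch; the repo definition below is the historical DataStax community repo and the package name follows OpsCenter’s free/community packaging, so treat both as assumptions:

    # /etc/yum.repos.d/datastax.repo
    [datastax]
    name=DataStax Repo for Apache Cassandra
    baseurl=http://rpm.datastax.com/community
    enabled=1
    gpgcheck=0

    yum install opscenter-free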

Configure

Note: by default OpsCenter will only accept connections on 127.0.0.1:8888. This guide won’t change that setting, but it can be changed by editing /etc/opscenter/opscenterd.conf

1. Start opscenter:

2. Start opscenter on boot:
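
A sketch, assuming the init script installed by opscenter-free is named opscenterd:

    service opscenterd start
    chkconfig opscenterd on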

Test

1. Reconnect ssh session and tunnel the default opscenter port (8888), as sketched after these steps:

2. Connect to 127.0.0.1:8888 with a browser
3. Click “Use Existing Cluster”
4. Enter a seed cluster node hostname or IP
Note: you can enter a new-line separated list of all nodes, but this is unnecessary
5. Click “Save cluster”
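
The tunnel in step 1 might look like this, with $USER and $HOSTNAME standing in for the remote login:

    ssh -L 8888:127.0.0.1:8888 $USER@$HOSTNAME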

Sources

[DEPRECATED] CentOS 6: Configure PAM authentication using Kerberos Tickets

Versions

  • CentOS 6.4

Prerequisites

Install

Configure
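
A minimal sketch of the usual CentOS 6 approach, using pam_krb5 driven by authconfig; $REALM and $KDC are placeholders for the Kerberos realm and KDC host:

    yum install pam_krb5 krb5-workstation
    authconfig --enablekrb5 --krb5realm=$REALM --krb5kdc=$KDC \
               --krb5adminserver=$KDC --update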

Test

1. Add Kerberos authenticated user as an admin

2. Verify by logging in as the user.

Sources

[DEPRECATED] CentOS 6: Install Single-node Hadoop from Cloudera CDH

Overview

Guide for setting up a single-node Hadoop on CentOS using the Cloudera CDH repository.

Versions

  • CentOS 6.4
  • Oracle Java JDK 1.6
  • CDH 4
  • Hadoop 0.20 (MRv1)

Prerequisites

Install

1. Download the yum repo file:

2. Install
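
A sketch; the repo URL and the pseudo-distributed MRv1 package follow Cloudera’s CDH4 conventions, so treat them as assumptions:

    cd /etc/yum.repos.d
    wget http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/cloudera-cdh4.repo
    yum install hadoop-0.20-conf-pseudo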

Configure

1. Format the name node (steps 1-10 are sketched together after this list)

Output:

2. Start namenode/datanode services

3. Optional: Start services on boot

4. Create directories

5. Create map/reduce directories

6. Start map/reduce services

7. Optional: Start services on boot

8. Optional: Create a home directory on the hdfs for the current user

9. Edit /etc/profile.d/hadoop.sh

10. Load into session
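
A sketch of steps 1-10, following CDH4’s MRv1 (hadoop-0.20-mapreduce) service names; the hdfs directory layout and the contents of hadoop.sh are assumptions based on Cloudera’s pseudo-distributed conventions:

    # 1. format the name node as the hdfs user
    sudo -u hdfs hdfs namenode -format

    # 2-3. start the hdfs services, optionally on boot
    service hadoop-hdfs-namenode start
    service hadoop-hdfs-datanode start
    chkconfig hadoop-hdfs-namenode on
    chkconfig hadoop-hdfs-datanode on

    # 4. create a world-writable /tmp on the hdfs
    sudo -u hdfs hadoop fs -mkdir /tmp
    sudo -u hdfs hadoop fs -chmod -R 1777 /tmp

    # 5. create the map/reduce staging directories
    sudo -u hdfs hadoop fs -mkdir -p /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
    sudo -u hdfs hadoop fs -chmod 1777 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
    sudo -u hdfs hadoop fs -chown -R mapred /var/lib/hadoop-hdfs/cache/mapred

    # 6-7. start the map/reduce services, optionally on boot
    service hadoop-0.20-mapreduce-jobtracker start
    service hadoop-0.20-mapreduce-tasktracker start
    chkconfig hadoop-0.20-mapreduce-jobtracker on
    chkconfig hadoop-0.20-mapreduce-tasktracker on

    # 8. optional: a home directory for the current user
    sudo -u hdfs hadoop fs -mkdir /user/$USER
    sudo -u hdfs hadoop fs -chown $USER /user/$USER

    # 9-10. /etc/profile.d/hadoop.sh, loaded into the current session
    echo 'export HADOOP_HOME=/usr/lib/hadoop' > /etc/profile.d/hadoop.sh
    source /etc/profile.d/hadoop.sh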

Test

1. Get a directory listing from hadoop hdfs

Output:

Note: results will vary based on user directories created

2. Navigate browser to http://<hostname>:50070

3. Navigate browser to http://<hostname>:50030

4. Run one of the examples

Output:
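
Steps 1 and 4 might look like this; the examples jar path follows CDH4’s MRv1 layout and is an assumption:

    hadoop fs -ls /
    hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 4 1000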

Sources

CentOS 6: Open a Port for iptables

Versions

  1. CentOS 6.5

Configure

  1. Edit /etc/sysconfig/iptables and add the following before COMMIT
    Typical /etc/sysconfig/iptables:

  2. If opening TCP port, add the following line above the first reject statement:

    Note
    If tcp or udp wasn’t specified, assume tcp
  3. If udp, instead add the following line above the first reject statement:

    Example: Open port 666 for tcp

  4. Restart iptables
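
Putting the steps together. The listing below is the stock CentOS 6 /etc/sysconfig/iptables with the example tcp rule for port 666 added above the first REJECT; for udp, swap “-m tcp -p tcp” for “-m udp -p udp”:

    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    -A INPUT -p icmp -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 666 -j ACCEPT
    -A INPUT -j REJECT --reject-with icmp-host-prohibited
    -A FORWARD -j REJECT --reject-with icmp-host-prohibited
    COMMIT

    # then restart iptables (step 4)
    service iptables restart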

Test from remote machine

Note
replace $HOSTNAME and $PORT below
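
One way to test, assuming nc is available on the remote machine:

    nc -zv $HOSTNAME $PORT      # tcp
    nc -zuv $HOSTNAME $PORT     # udp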

CentOS 6: Install MongoDB

Versions

  1. CentOS 6.5
  2. MongoDB 2.4.10

Install

  1. Edit /etc/yum.repos.d/10gen.repo

  2. Install
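
A sketch; the repo definition below is the historical 10gen repository and the package names are 2.4-era conventions, so treat them as assumptions:

    # /etc/yum.repos.d/10gen.repo
    [10gen]
    name=10gen Repository
    baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
    gpgcheck=0
    enabled=1

    yum install mongo-10gen mongo-10gen-server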

Configure

  1. Start service

  2. Start on boot
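
A sketch, assuming the init script installed by mongo-10gen-server is named mongod:

    service mongod start
    chkconfig mongod on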

Test
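
A quick check with the mongo shell:

    mongo --eval 'db.runCommand({ ping: 1 })'
    # expect { "ok" : 1 } in the output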

Optional: Open firewall port

  1. Open port 27017 for mongodb. See CentOS 6: Open a Port for iptables

[DEPRECATED] CentOS 6: Install Hadoop from Apache Bigtop

WARNING

This guide is a work-in-progress and currently does not result in a fully working Hadoop. Please see CentOS 6: Install Single-node Hadoop from Cloudera CDH

Overview

Guide for setting up a single-node Hadoop on CentOS using the Apache Bigtop repo.

Versions

  • CentOS 6.3
  • Oracle Java JDK 1.6
  • Apache BigTop 0.5.0
  • Hadoop 2.0.2-alpha

Prerequisites

Install

1. Download the yum repo file:

2. Install
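
A sketch; the repo URL follows the layout of the Apache Bigtop 0.5.0 release area and is an assumption:

    cd /etc/yum.repos.d
    wget http://archive.apache.org/dist/bigtop/bigtop-0.5.0/repos/centos6/bigtop.repo
    yum install hadoop\*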

Configure

Separate where the namenode and datanode store their files

1. Edit /etc/hadoop/conf/hdfs-site.xml and change the following properties to the listing below:

  • dfs.namenode.name.dir
  • dfs.namenode.checkpoint.dir
  • dfs.datanode.data.dir

Note: this step is not part of the official Apache BigTop instructions, but was required to avoid errors when running a datanode on the same machine as the namenode.
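
A sketch of the idea; the namenode and datanode paths match the /var/lib/hadoop-hdfs/datanode directory mentioned in the note under step 2, while the checkpoint path is a guess:

    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///var/lib/hadoop-hdfs/namenode</value>
    </property>
    <property>
      <name>dfs.namenode.checkpoint.dir</name>
      <value>file:///var/lib/hadoop-hdfs/checkpoint</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///var/lib/hadoop-hdfs/datanode</value>
    </property>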

2. Format the name node (steps 2-7 are sketched together after this list)

Output:

Note: formatting the datanode is not required, *however* if you have a previous install, you may have to remove /var/lib/hadoop-hdfs/datanode to clear locks

3. Start hadoop namenode and datanode

TODO: figure out why hadoop-hdfs-zkfc doesn’t start
4. Start services on boot

5. Optional: Create a home directory on the hdfs

6. Edit /etc/profile.d/hadoop.sh

7. Load into session
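
A sketch of steps 2-7, assuming Bigtop’s hadoop-hdfs service names; the contents of hadoop.sh are a guess:

    # 2. format the name node as the hdfs user
    sudo -u hdfs hdfs namenode -format

    # 3-4. start the hdfs services, and on boot
    service hadoop-hdfs-namenode start
    service hadoop-hdfs-datanode start
    chkconfig hadoop-hdfs-namenode on
    chkconfig hadoop-hdfs-datanode on

    # 5. optional: a home directory on the hdfs
    sudo -u hdfs hadoop fs -mkdir /user/$USER
    sudo -u hdfs hadoop fs -chown $USER /user/$USER

    # 6-7. /etc/profile.d/hadoop.sh, loaded into the current session
    echo 'export HADOOP_HOME=/usr/lib/hadoop' > /etc/profile.d/hadoop.sh
    source /etc/profile.d/hadoop.sh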

Test

1. Download the examples (they are missing 2.0.2-alpha for some reason)

2. Get a directory listing from hadoop hdfs

3. Run one of the examples

TODO: while the cluster appears to be working, this example hangs. :[

4. Navigate browser to http://<hostname>:50070
5. Click on “Live Nodes”
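
Steps 1-3 might look like this; the Maven Central path for the 2.0.2-alpha examples jar is an assumption:

    wget http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-examples/2.0.2-alpha/hadoop-mapreduce-examples-2.0.2-alpha.jar
    hadoop fs -ls /
    hadoop jar hadoop-mapreduce-examples-2.0.2-alpha.jar pi 4 1000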

Sources

[DEPRECATED] CentOS 6: Install wso2server.sh as a service that starts on boot

Versions

  • CentOS 6.3
  • Oracle JDK 1.6

Configure

Note: Replace <product> with the abbreviated name of the WSO2 product (API Manager = ‘am’, Data Services Server = ‘dss’, etc) and replace <version> with the version number (‘1.3.1’, ‘4.0.6’, etc)

1. Edit /etc/init.d/wso2<product>

Note: the script hard codes JAVA_HOME, but this can be deferred to the /etc/sysconfig/wso2<product> script (which you must create) if desired. Note that the normal /etc/profile is *not* run for services.
TODO: find better way of setting JAVA_HOME, can’t execute /etc/profile or /etc/profile.d/java.sh directly
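
A skeletal example of such a script; the install path under /opt and the default JAVA_HOME are assumptions (substitute <product> and <version> as noted above):

    #!/bin/bash
    # chkconfig: 2345 80 20
    # description: WSO2 <product> server

    # allow /etc/sysconfig/wso2<product> to override JAVA_HOME
    [ -f /etc/sysconfig/wso2<product> ] && . /etc/sysconfig/wso2<product>
    export JAVA_HOME=${JAVA_HOME:-/usr/java/default}

    WSO2_HOME=/opt/wso2<product>-<version>

    case "$1" in
      start|stop|restart)
        $WSO2_HOME/bin/wso2server.sh "$1"
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
    esac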

2. Make it executable

3. Start service on boot
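
Putting steps 2 and 3 together:

    chmod +x /etc/init.d/wso2<product>
    chkconfig --add wso2<product>
    chkconfig wso2<product> on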

Verify

1. Start service
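
A sketch; the carbon log path assumes the /opt install location from the script above:

    service wso2<product> start
    tail -f /opt/wso2<product>-<version>/repository/logs/wso2carbon.log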

Source

CentOS 6: Clear the yum cache

Versions

  1. CentOS 6.3

Overview

Ran into problems tonight, after installing different versions of Hadoop, where yum would try to download the incorrect version. Fixing it required cleaning yum’s caches AND manually deleting the repo’s cache in /var/cache/yum.

Guide

  1. Remove the repo
    Note
    replace $REPONAME below with the name of the repo to clear

  2. Clean all

  3. Delete the yum cache for the repo
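
A sketch of the three steps, assuming the repo was defined in its own file; on CentOS 6 the cache lives under /var/cache/yum/$basearch/$releasever/$repoid, hence the globs:

    # 1. remove the repo definition
    rm /etc/yum.repos.d/$REPONAME.repo

    # 2. clean all
    yum clean all

    # 3. delete the yum cache for the repo
    rm -rf /var/cache/yum/*/*/$REPONAME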