Installing Vertica Database through Admintools

To install the Vertica database you need to log out from root and log in as dbadmin. Then follow the steps below.

  1. Log out and log back in as dbadmin.
  2. Run /opt/vertica/bin/adminTools as dbadmin. The following screen will appear:
  3. On the screen shown below, simply select OK.
    1_License_file_pathname 
  4. Select Accept and then Select OK
    2_accept
  5. Select 6 Configuration Menu
    3_select_configuration_menu
  6. Select CREATE DATABASE and Click on OK
    4_Select_Create_Database
  7. Enter a database name and comment, then select OK.
    5_DB_Name_N_Comments
  8. Enter your desired password for the database.
    6_Enter_Pass_for_Example_DB
  9. Re-enter the password to confirm.
    7_RE_Enter_Pass_for_Example_DB
  10. Select Hosts for the database and Select OK
    8_Select_Host_Name
  11. Define Catalog Pathname and Data Pathname or you can type the below mentioned path.
    9_Define_Catalog_N_Data_Pathname
  12. A screen will appear saying that a database with 1 or 2 hosts cannot be K-safe and may lose data if it crashes. Simply select OK.
    10_Warning_1_or_2_hosts_cannot_be_k_safe
  13. It will ask for the final confirmation: Create this database? Select Yes.
    11_Final_Confirmation_for_Creating_DB
  14. Once you select Yes, the screen shown below will appear.
    12_After_Clicking_on_OK
  15. The following screen will appear; select OK.
    13_DB_VMart_Created_Successfully
  16. Now you need to START DATABASE. Select 3 Start Database
    14_2_Select_Start_Database
  17. Select (X) VMart and Click OK
    14_3_Select_VMart
  18. Enter the password you set for the database earlier.
    14_4_VMart_DB_Password
  19. A screen will appear saying "Database VMart Started Successfully". Select OK.
    14_5_Database VMart_Started
  20. Then go to the Main Menu.
    14_Main_Menu
  21. Select EXIT.
    15_Select_Exit

 

NOTE : After following the above steps, if the VMart database does not appear, please follow the instructions at this URL:

https://verticabase.wordpress.com/2015/08/25/failed-to-create-vmart-database-hp-vertica/

Solving Vertica Installation Failed Errors

You will find the solution for each HINT/WARN/FAIL (XXXX) code below:

ERROR : HINT (S0305): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0305

TZ is unset for dbadmin. Consider updating .profile or .bashrc

REFERENCE : http://www.ibm.com/developerworks/library/l-cpufreq-2/

ERROR S0305 SOLUTION :-

Edit dbadmin's profile:

[root@storage ~]# vi /home/dbadmin/.bash_profile

And add the following line:

export TZ="America/New_York"
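A quick sanity check of the timezone value before adding it permanently, assuming the America/New_York zone used above:

```shell
# Set TZ for the current shell and confirm the zone resolves
export TZ="America/New_York"
date +%Z   # prints EST or EDT depending on daylight saving time
```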

———————————————————————————- ———————————————————————————-

HINT (S0041): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0041

Could not find the following tools normally provided by the mcelog

package: mcelog

HINT (S0041) SOLUTION :

On RedHat based systems, run the following commands as sudo or root:

yum install pstack

yum install mcelog

yum install sysstat

——————————————————————————————————————————————————————–

WARN (S0160): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0160

These disks do not have ‘ext3’ or ‘ext4’ filesystems: ‘/dev/sda1’ =

‘vfat’

WARN (S0160) SOLUTION : See the detailed S0160 walkthrough further down in this post, which converts /dev/sda1 from vfat to ext4.

——————————————————————————————————————————————————————–

FAIL (S0150): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0150

These disks do not have ‘deadline’ or ‘noop’ IO scheduling: ‘/dev/sda6’

(‘sda’) = ‘cfq’, ‘/dev/sda5’ (‘sda’) = ‘cfq’, ‘/dev/sda7’ (‘sda’) =

‘cfq’, ‘/dev/sda9’ (‘sda’) = ‘cfq’, ‘/dev/sda8’ (‘sda’) = ‘cfq’,

‘/dev/sda4’ (‘sda’) = ‘cfq’, ‘/dev/sda1’ (‘sda’) = ‘cfq’, ‘/dev/sda2’

(‘sda’) = ‘cfq’, ‘/dev/sda3’ (‘sda’) = ‘cfq’

FAIL (S0150) SOLUTION : See the S0150 solutions further down in this post.

——————————————————————————————————————————————————————–

WARN (S0141): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0141

CPUs have discouraged cpufreq scaling policies: cpu0, cpu1, cpu2, cpu3

WARN (S0141) SOLUTION :

The only reliable method is to disable CPU scaling in BIOS.

CHECKING CPU INFORMATION

[root@storage cpufreq]# grep -E '^model name|^cpu MHz' /proc/cpuinfo

model name  : Intel(R) Core(TM) i5-3230M CPU @ 2.60GHz

cpu MHz           : 1200.000

model name  : Intel(R) Core(TM) i5-3230M CPU @ 2.60GHz

cpu MHz           : 1200.000

model name  : Intel(R) Core(TM) i5-3230M CPU @ 2.60GHz

cpu MHz           : 1200.000

model name  : Intel(R) Core(TM) i5-3230M CPU @ 2.60GHz

cpu MHz           : 1200.000

[root@storage cpufreq]#

[root@storage cpufreq]# uname -a

Linux storage.castrading.com 2.6.32-431.el6.x86_64 #1 SMP Sun Nov 10 22:19:54 EST 2013 x86_64 x86_64 x86_64 GNU/Linux

[root@storage cpufreq]#

Check that the current CPU speed differs from the maximum:

# grep -E '^model name|^cpu MHz' /proc/cpuinfo

[root@storage cpufreq]# service cpuspeed start

Enabling ondemand cpu frequency scaling:                   [ OK ]

[root@storage cpufreq]# lsmod | grep ondemand

cpufreq_ondemand       10544 4

freq_table             4936 2 cpufreq_ondemand,acpi_cpufreq

[root@storage cpufreq]#

REFER URL : http://www.servernoobs.com/avoiding-cpu-speed-scaling-in-modern-linux-distributions-running-cpu-at-full-speed-tips/

——————————————————————————————————————————————————————–

FAIL (S0150): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0150

These disks do not have ‘deadline’ or ‘noop’ IO scheduling: ‘/dev/sda6’

(‘sda’) = ‘cfq’, ‘/dev/sda5’ (‘sda’) = ‘cfq’, ‘/dev/sda7’ (‘sda’) =

‘cfq’, ‘/dev/sda9’ (‘sda’) = ‘cfq’, ‘/dev/sda8’ (‘sda’) = ‘cfq’,

‘/dev/sda4’ (‘sda’) = ‘cfq’, ‘/dev/sda1’ (‘sda’) = ‘cfq’, ‘/dev/sda2’

(‘sda’) = ‘cfq’, ‘/dev/sda3’ (‘sda’) = ‘cfq’

FAIL (S0150) SOLUTION :

RedHat and SuSE Based Systems

For each drive in the HP Vertica system, HP Vertica recommends that you set the readahead value to 2048 for most deployments.

The command immediately changes the readahead value for the specified disk. The second line adds the command to /etc/rc.local so

that the setting is applied each time the system is booted. Note that some deployments may require a higher value and the setting can

be set as high as 8192, under guidance of support.

Note: For systems that do not support /etc/rc.local, use the equivalent startup script that is run after the destination runlevel has been reached.

For example SuSE uses /etc/init.d/after.local.

[root@storage cpufreq]# /sbin/blockdev --setra 2048 /dev/sda

[root@storage cpufreq]# echo '/sbin/blockdev --setra 2048 /dev/sda' >> /etc/rc.local

[root@storage cpufreq]#
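After applying the readahead setting above, you can verify that it took effect; blockdev --getra reports the current readahead in 512-byte sectors (the device name is the one from this example):

```shell
# Read back the readahead value set by --setra above (root required)
/sbin/blockdev --getra /dev/sda
```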

——————————————————————————————————————————————————————–

FAIL (S0030): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0030

ntpd process is not running: [‘ntpd’, ‘ntp’]

FAIL (S0030) SOLUTION :

For RedHat and SuSE based systems, simply use the service and chkconfig utilities to start NTP and have it start at boot time.

[root@storage cpufreq]# /sbin/chkconfig ntpd on

[root@storage cpufreq]# /sbin/service ntpd restart

Shutting down ntpd:                                       [ OK ]

Starting ntpd:                                            [ OK ]

[root@storage cpufreq]#
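To confirm that ntpd is actually synchronizing after the restart, you can query its peers; the output will vary with the servers configured in /etc/ntp.conf:

```shell
# List the NTP peers ntpd is polling and their sync status
ntpq -p
```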

——————————————————————————————————————————————————————–

FAIL (S0310): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0310

Transparent hugepages is set to ‘always’. Must be ‘never’ or ‘madvise’.

FAIL (S0310) SOLUTION :

RedHat Systems

To determine if transparent hugepages is enabled, run the following command. The setting returned in brackets is your current setting.

[root@storage cpufreq]# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled

[always] madvise never

Edit your boot loader configuration (for example /etc/grub.conf); typically you add the following to the end of the kernel line. However, consult the documentation for your system before editing your boot loader configuration.

transparent_hugepage=never

[root@storage cpufreq]# vi /etc/grub.conf

[root@storage cpufreq]# echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled

——————————————————————————————————————————————————————–

FAIL (S0081) SOLUTION :

SELinux must not be enforcing. Run setenforce 0 to switch to permissive mode immediately, and edit /etc/selinux/config so the change survives a reboot.
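A minimal sketch of the change, assuming the stock RHEL config file location; setenforce 0 takes effect immediately, while the sed edit makes it persist across reboots (run both as root):

```shell
# Switch SELinux to permissive mode for the running system
setenforce 0

# Persist the change: flip the SELINUX= line in the config file
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```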

——————————————————————————————————————————————————————–

FAIL (S0150): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0150

These disks do not have ‘deadline’ or ‘noop’ IO scheduling: ‘/dev/sda6’

(‘sda’) = ‘cfq’, ‘/dev/sda5’ (‘sda’) = ‘cfq’, ‘/dev/sda7’ (‘sda’) =

‘cfq’, ‘/dev/sda9’ (‘sda’) = ‘cfq’, ‘/dev/sda8’ (‘sda’) = ‘cfq’,

‘/dev/sda4’ (‘sda’) = ‘cfq’, ‘/dev/sda1’ (‘sda’) = ‘cfq’, ‘/dev/sda2’

(‘sda’) = ‘cfq’, ‘/dev/sda3’ (‘sda’) = ‘cfq’

FAIL (S0150) ANSWER :

cat /sys/block/sda/queue/scheduler

noop deadline [cfq]

http://www.aodba.com/en/blog/2015/02/24/hp-vertica-installation-failed-errors/

——————————————————————————————————————————————————————

WARN (S0141): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0141

CPUs have discouraged cpufreq scaling policies: cpu0, cpu1, cpu2, cpu3

WARN (S0141) SOLUTION :-

uname -a

pgrep -lf

pgrep -flvx

for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do [ -f $CPUFREQ ] || continue; echo -n performance > $CPUFREQ; done

grep -E '^model name|^cpu MHz' /proc/cpuinfo

service cpuspeed stop

rmmod cpufreq_ondemand acpi_cpufreq freq_table

lsmod | grep ondemand

for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do [ -f $CPUFREQ ] || continue; echo -n performance > $CPUFREQ; done

/opt/vertica/sbin/install_vertica -s storage.castrading.com -r /root/Downloads/vertica-7.1.1-0.x86_64.RHEL5.rpm -u dbadmin

——————————————————————————————————————————————————————

WARN (S0160): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0160

These disks do not have ‘ext3’ or ‘ext4’ filesystems: ‘/dev/sda1’ =

‘vfat’

WARN (S0160) SOLUTION : If you check the TYPE of /dev/sda1 in the df -T output below, it is vfat; it needs to be converted from vfat to ext4.

STEP 1 :

[root@storage /]# df -T

Filesystem     Type 1K-blocks     Used Available Use% Mounted on

/dev/sda3     ext4   30237648 25919036   2782612 91% /

tmpfs         tmpfs   1935272     880   1934392   1% /dev/shm

/dev/sda2     ext4   5039616   164920   4618696   4% /boot

/dev/sda1     vfat   4087992     264   4087728   1% /boot/efi

/dev/sda4     ext4   30237648 8515920 20185728 30% /home

/dev/sda8     ext4   15118728   684152 13666576   5% /opt

/dev/sda9    ext4   8063408   164468   7489340   3% /tmp

/dev/sda7     ext4   20158332 4023220 15111112 22% /usr

/dev/sda5     ext4   25197676   309188 23608488   2% /usr/local

/dev/sda6     ext4   25197676   513540 23404136   3% /var

STEP 2 : Copy the /boot/efi folder or take a backup of it, since reformatting /dev/sda1 will erase it.

STEP 3 :

[root@storage ~]# mount | grep sda1

[root@storage ~]# umount /boot/efi/

[root@storage ~]# mkfs.ext4 /dev/sda1

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=1 blocks, Stripe width=0 blocks

256000 inodes, 1024000 blocks

51200 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=1048576000

32 block groups

32768 blocks per group, 32768 fragments per group

8000 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done

Creating journal (16384 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 23 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@storage ~]# mount /dev/sda1 /boot/efi/

[root@storage ~]#

NOTE : Now copy the backed-up contents back into /boot/efi.

AND RUN THE VERTICA SCRIPT AGAIN :-

[root@storage ~]# /opt/vertica/sbin/install_vertica -s storage.castrading.com -r /root/Downloads/vertica-7.1.1-0.x86_64.RHEL5.rpm -u dbadmin

——————————————————————————————————————————————————————-

FAIL (S0150): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0150

These disks do not have ‘deadline’ or ‘noop’ IO scheduling: ‘/dev/sda1’

(‘sda’) = ‘cfq’, ‘/dev/sda6’ (‘sda’) = ‘cfq’, ‘/dev/sda5’ (‘sda’) =

‘cfq’, ‘/dev/sda7’ (‘sda’) = ‘cfq’, ‘/dev/sda9’ (‘sda’) = ‘cfq’,

‘/dev/sda8’ (‘sda’) = ‘cfq’, ‘/dev/sda4’ (‘sda’) = ‘cfq’, ‘/dev/sda2’

(‘sda’) = ‘cfq’, ‘/dev/sda3’ (‘sda’) = ‘cfq’

FAIL (S0150) SOLUTION :

Vertica requires either the deadline or noop I/O scheduler; CFQ creates performance bottlenecks.

To make the setting permanent, add the following to the kernel command line in /etc/grub.conf:

elevator=deadline
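For example, a kernel line in /etc/grub.conf might then look like this (the kernel version and root device here are placeholders based on this machine's uname output):

```
kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/sda3 rhgb quiet elevator=deadline
```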

Configure the I/O Scheduler – Changing the Scheduler Through the /sys Directory

  • Check the current status:

$ cat /sys/block/sda/queue/scheduler

noop anticipatory deadline [cfq]

  • Change for the current run:

$ echo deadline > /sys/block/sda/queue/scheduler

  • Add the change to rc.local so it survives reboot:

$ echo 'echo deadline > /sys/block/sda/queue/scheduler' >> /etc/rc.local

  • Check the status:

$ cat /sys/block/sda/queue/scheduler

noop anticipatory [deadline] cfq

$ find / -iname "scheduler"

Ok, we have the sda device. You can confirm it with the blkid command or fdisk -l.

NOTE: There should not be any LVM partitions associated with Vertica.

If an LVM volume such as /dev/mapper/VolGroup-lv_root is in use, make the scheduler change for its underlying device as well. Check the device name with lvdisplay -v or ll /dev/VolGroup/ ; in my case it was block device 253:0.

I was unable to resolve this one because LVM was used, and I had to reinstall my OS with plain ext4 partitions.

[root@storage ~]# /opt/vertica/sbin/install_vertica -s storage.castrading.com -r /root/Downloads/vertica-7.1.1-0.x86_64.RHEL5.rpm -u dbadmin

——————————————————————————————————————————————————————-

FAIL (S0310): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0310

Transparent hugepages is set to ‘always’. Must be ‘never’ or ‘madvise’.

FAIL (S0310) SOLUTION :

To determine if transparent hugepages is enabled, run the following command. The setting returned in brackets is your current setting.

FIRST — cat /sys/kernel/mm/redhat_transparent_hugepage/enabled

[always] madvise never

You can disable transparent hugepages one of two ways:

SECOND — Edit your boot loader (for example /etc/grub.conf); typically you add the following to the end of

the kernel line. However, consult the documentation for your system before editing your boot loader

configuration.

transparent_hugepage=never

NOTE : YOU CAN FOLLOW ANY ONE FROM SECOND AND THIRD STEPS

THIRD — Or, edit /etc/rc.local and add the following script.

Note: For systems that do not support /etc/rc.local, use the equivalent startup script that is run after the destination runlevel has been reached. For example SuSE uses /etc/init.d/after.local.

if test -f /sys/kernel/mm/redhat_transparent_hugepage/enabled; then

echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled

fi

NOTE : You must reboot your system for the setting to take effect, or

run the following echo line to proceed with the install without rebooting:

echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled

[root@storage etc]# /opt/vertica/sbin/install_vertica -s storage.castrading.com -r /root/Downloads/vertica-7.1.1-0.x86_64.RHEL5.rpm -u dbadmin

===================================================================================================

Please evaluate your hardware using Vertica’s validation tools:

https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=VALSCRIPT

How to Install HP Vertica on RHEL 6 or RHEL 7 (Single Node)?

General Information :

The HP Vertica Analytic Database is based on a massively parallel processing (MPP), shared-nothing architecture, in which the query processing workload is divided among all nodes of the Vertica database. HP highly recommends using a homogeneous hardware configuration for your HP Vertica cluster; that is, each node of the cluster should be similar in CPU, clock speed, number of cores, memory, and operating system version.

Note : Please make sure that a Linux OS is installed on your machine. For clustering, Vertica requires a minimum of three (3) nodes, so you would need Linux on three different nodes. In our case we will install Vertica on a single node only. Before you install, please read the information on Filesystem Options and General Platform Recommendations below:

Filesystem Options :
HP Vertica supports the ext3 and ext4 file systems.
Linux Logical Volume Manager (LVM) is not supported.
The recommended disk block size is 4096 bytes.

General Platform Recommendations :
ext4 is recommended over ext3 for performance reasons.
Use 2GB of swap space regardless of the amount of installed RAM.
Place the database /catalog directory on the same drive as the OS.

Now follow the below mentioned instruction for Vertica Installation :

  1. Download Vertica. I will be downloading version 7.1.1 :::  vertica-7.1.1-0.x86_64.RHEL5.rpm
  2. Click here to download vertica-7.1.1-0.x86_64.RHEL5.rpm

3.  [root@storage1 Downloads]# rpm -Uvh vertica-7.1.1-0.x86_64.RHEL5.rpm

Preparing…               ########################################### [100%]

1:vertica               ########################################### [100%]

Vertica Analytic Database V7.1.1-0 successfully installed on host storage1.example.com

———————————————————————————-

Important Information

———————————————————————————-

If you are upgrading from a previous version, you must backup your database before

continuing with this install. After restarting your database, you will be unable

to revert to a previous version of the software.

———————————————————————————-

To download the latest Vertica documentation in zip or tar format please visit the

myvertica web site.

To complete installation and configuration of the cluster,

run: /opt/vertica/sbin/install_vertica

Step 4 :  [root@storage Downloads]# /opt/vertica/sbin/install_vertica -s storage.example.com -r /root/Downloads/vertica-7.1.1-0.x86_64.RHEL5.rpm

Vertica Analytic Database 7.1.1-0 Installation Tool

>> Validating options…

Mapping hostnames in --hosts (-s) to addresses…

storage.example.com         => 192.168.0.1

>> Starting installation tasks.

>> Getting system information for cluster (this may take a while)…

Default shell on nodes:

192.168.0.1 /bin/bash

>> Validating software versions (rpm or deb)…

>> Beginning new cluster creation…

backing up admintools.conf on 192.168.0.1

>> Creating or validating DB Admin user/group…

Password for new dbadmin user (empty = disabled)

Successful on hosts (1): 192.168.0.1

Provided DB Admin account details: user = dbadmin, group = verticadba, home = /home/dbadmin

Creating group… Adding group

Validating group… Okay

Creating user… Adding user, Setting credentials

Validating user… Okay

>> Validating node and cluster prerequisites…

Prerequisites not fully met during local (OS) configuration for

verify-192.168.0.1.xml:

HINT (S0305): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0305

TZ is unset for dbadmin. Consider updating .profile or .bashrc

REFERENCE : http://www.ibm.com/developerworks/library/l-cpufreq-2/

HINT (S0041): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0041

Could not find the following tools normally provided by the mcelog

package: mcelog

WARN (S0141): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0141

CPUs have discouraged cpufreq scaling policies: cpu0, cpu1, cpu2, cpu3

WARN (S0160): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0160

These disks do not have ‘ext3’ or ‘ext4’ filesystems: ‘/dev/sda1’ =

‘vfat’

FAIL (S0150): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0150

These disks do not have ‘deadline’ or ‘noop’ IO scheduling: ‘/dev/sda6’

(‘sda’) = ‘cfq’, ‘/dev/sda5’ (‘sda’) = ‘cfq’, ‘/dev/sda7’ (‘sda’) =

‘cfq’, ‘/dev/sda9’ (‘sda’) = ‘cfq’, ‘/dev/sda8’ (‘sda’) = ‘cfq’,

‘/dev/sda4’ (‘sda’) = ‘cfq’, ‘/dev/sda1’ (‘sda’) = ‘cfq’, ‘/dev/sda2’

(‘sda’) = ‘cfq’, ‘/dev/sda3’ (‘sda’) = ‘cfq’

FAIL (S0020): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0020

Readahead size of sda (/dev/sda6,/dev/sda5,/dev/sda7,/dev/sda9,/dev/sda8

,/dev/sda4,/dev/sda1,/dev/sda2,/dev/sda3) is too low for typical

systems: 256 < 2048

FAIL (S0030): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0030

ntpd process is not running: [‘ntpd’, ‘ntp’]

FAIL (S0310): https://my.vertica.com/docs/7.1.x/HTML/index.htm#cshid=S0310

Transparent hugepages is set to ‘always’. Must be ‘never’ or ‘madvise’.

System prerequisites failed. Threshold = WARN

Hint: Fix above failures or use --failure-threshold

Installation FAILED with errors.

****

AdminTools and your existing Vertica databases may be unavailable.

Investigate the above warnings/errors and re-run installation.

****

NOTE : To solve the above errors, please follow this URL; you will find the solutions listed accordingly.

https://verticabase.wordpress.com/2015/08/25/solving-vertica-installation-failed-error-solution/

After you have solved all of the above errors, you need to install the Vertica database.

Click here to INSTALL VERTICA DATABASE

What is a Cluster ?

Every column's data is distributed across all the nodes in the cluster, so if one node fails the database continues to operate. Similarly, when a failed node comes back online, it automatically signals the other nodes so it can update its local copy of the data.

CLUSTER CONCEPT

Before installing Vertica you must first configure your cluster. A cluster is made up of one or more physical hosts, often configured as per the documentation to run Vertica. These hosts each have a name and a static IP address, and they share neither disk space nor main memory. On each host, edit /etc/hosts so that the machines can identify each other. There are also dedicated ports on each host that must be left open for Vertica communication; the full list of reserved port numbers is in the installation guide.
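For example, /etc/hosts on every node of a three-node cluster might contain entries like these (hostnames and addresses are placeholders):

```
192.168.0.1   node01.example.com   node01
192.168.0.2   node02.example.com   node02
192.168.0.3   node03.example.com   node03
```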

ONE

On every cluster host, the root user is required to install both Vertica Analytics and the Management Console. In addition to other cluster configuration steps, you must ensure that the root user can use SSH to log in to all hosts in the cluster. At least two other users must be created to install Vertica: the user dbadmin is the unique owner of the Vertica database, and another user, which we have named mcadmin, owns and controls the Management Console. Both of these users must be in the UNIX group verticadba.
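As a sketch, the group and users could be created by hand like this (run as root). Note that install_vertica can also create dbadmin for you, as the installer transcript earlier shows, and mcadmin is just the name chosen in this post:

```shell
groupadd verticadba                 # group both admin users must belong to
useradd -g verticadba -m dbadmin    # owner of the Vertica database
useradd -g verticadba -m mcadmin    # owner of the Management Console
```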

TWO

On one of the hosts, create a directory to hold the Vertica database and Management Console installation files. The directory should be owned by the user root. It does not matter which host you create the directory on; Vertica treats them all equally.

THREE

On all of the hosts, create a directory to hold the Vertica database, which will be distributed across all hosts. The directory should be owned by the user dbadmin.

FOUR

The Vertica database server and Management Console installation packages are available from the Downloads page of myvertica.com.

 

 

Failed To Create VMart Database – HP Vertica

If the VMart database does not appear after installing HP Vertica, then please follow these steps:

First : Run admintools :

1. [dbadmin@example ~]$ admintools

2. Select 6 CONFIGURATION MENU

3. Select 7 DROP DATABASE

4. Select M Main Menu

5. Select E Exit

 

Second : Type Following :

[dbadmin@example ~]$ /opt/vertica/sbin/install_example VMart
Installing VMart example database
Thu May 21 11:13:00 EDT 2015
Creating Database
Completed
Generating Data. This may take a few minutes.
Completed
Creating schema
Completed
Loading 5 million rows of data. Please stand by.
Completed
Removing generated data files
Example database creation complete

Log is located in /opt/vertica/examples/log/ExampleInstall.txt

Thu May 21 11:14:11 EDT 2015
[dbadmin@example ~]$

It will now show all the tables. For detailed information, go to vsql and simply type \d, or follow the steps below:

[dbadmin@example ~]$ vsql
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
\g or terminate with semicolon to execute query
\q to quit

dbadmin=> \d
List of tables
Schema    |         Name          | Kind  |  Owner  | Comment
--------------+-----------------------+-------+---------+---------
online_sales | call_center_dimension | table | dbadmin |
online_sales | online_page_dimension | table | dbadmin |
online_sales | online_sales_fact     | table | dbadmin |
public       | customer_dimension    | table | dbadmin |
public       | date_dimension        | table | dbadmin |
public       | employee_dimension    | table | dbadmin |
public       | inventory_fact        | table | dbadmin |
public       | product_dimension     | table | dbadmin |
public       | promotion_dimension   | table | dbadmin |
public       | shipping_dimension    | table | dbadmin |
public       | vendor_dimension      | table | dbadmin |
public       | warehouse_dimension   | table | dbadmin |
store        | store_dimension       | table | dbadmin |
store        | store_orders_fact     | table | dbadmin |
store        | store_sales_fact      | table | dbadmin |
(15 rows)

dbadmin=>