XTR Raid 10 install with 1.6.3 SB CD [Failed raid ??]

Posts related specifically to running Strongbolt v1.06 on RaQ XTR hardware should go here - please include any BUG reports here as well.

Moderator: LiamM

XTR Raid 10 install with 1.6.3 SB CD [Failed raid ??]

Postby KlausGimm on Wed Feb 13, 2008 7:42 pm

Aloah folks !

I finally saved up enough cash for four new HDDs for the XTR and went back to the little server.

It took three tries to get the installer running, though. The first two died during the RPM bootstrap stage, with the odd message

"Connection lost to 192.168.0.100 - still trying. "

Or something to that effect anyway.

Rebooted the install server and the third attempt went through. The system info shows home with a little over 400 GB, which would be correct for 4 x 250 GB SP2514N HDDs.

However, the big money question is: did the RAID 10 install work correctly, or did the installer just set up a RAID 0?

Is there a command to run or a config file to look at? I seem to remember having read somewhere that Linux keeps a RAID config file with those details, but I cannot find it any more.
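
(For anyone searching this later: the usual quick check, assuming the standard Linux software-RAID tools are in place, would be something like the following; /etc/raidtab and /etc/mdadm.conf are just the common locations, not necessarily what Strongbolt uses.)

Code: Select all

# Show the live state of all software RAID (md) arrays
cat /proc/mdstat

# The array layout is usually described in one of these files
# (raidtools-era systems use /etc/raidtab, mdadm-based ones /etc/mdadm.conf)
cat /etc/raidtab 2>/dev/null
cat /etc/mdadm.conf 2>/dev/null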

Hoping someone will enlighten me on the subject.

It would be very cool to get confirmation that RAID 10 is correctly in place.

Sorry for such a dumb question, but I am far from being an experienced Linux user :/


Thanks a lot in advance.

Best regards to everyone

Klaus

P.S.: Hey Jim and Tim, any chance the next release of SB will have front panel LED support for the XTR? That would be awesome!
Last edited by KlausGimm on Wed Feb 13, 2008 9:13 pm, edited 1 time in total.
Symantec Velociraptor 1300 - 2.10.3 ext3
Strongbolt (1.06.03) - rock stable Raid 10
4x Samsung SP2514N
Single CPU 1GHz
(Dual CPU leads to memory errors)
KlausGimm
Legendary Forum member
 
Posts: 263
Joined: Fri Oct 27, 2006 3:25 pm
Location: Germany

Postby KlausGimm on Wed Feb 13, 2008 7:59 pm

Hmm, odd.

Now Active Monitor - Status - Disk Integrity tells me:

Code: Select all

Redundant Array of Independent Disks (RAID) Status Details
   Current Status
   
Severe Problem        
Your system is configured for disk mirroring (RAID 1) using 2 disks.
A hard drive has failed. Please shutdown the server appliance and replace the failed hard drive with a new one that is the same size as the remaining drive. Data will be restored to the replacement hard drive automatically.
   Status Last Changed
   
February 13 2008 8:57 PM

Drive Status Details
   Current Status
   
Severe Problem        One or more of the disks is having a problem. The illustration below shows the location of the disks. Move the mouse over the disk image to see more details. /dev/md
   Status Last Changed
   
February 13 2008 8:57 PM



However, there is unfortunately no graphic visible.
All four HDDs were brand new.
It's possible that one of them has a fault, but that is not very likely.
Could this indicate a different problem?

If not, how do I determine which HDD is supposedly dead?
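
(A rough sketch of how one might narrow this down, assuming smartmontools is installed on the box; the drive device names are taken from the fdisk output further down in this thread.)

Code: Select all

# Arrays with a missing member show up as e.g. [2/1] [U_]
cat /proc/mdstat

# SMART health check on each physical drive
for d in /dev/hde /dev/hdg /dev/hdi /dev/hdk; do
    echo "== $d =="
    smartctl -H "$d"
done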

Best regards

Klaus
Symantec Velociraptor 1300 - 2.10.3 ext3
Strongbolt (1.06.03) - rock stable Raid 10
4x Samsung SP2514N
Single CPU 1GHz
(Dual CPU leads to memory errors)
KlausGimm
Legendary Forum member
 
Posts: 263
Joined: Fri Oct 27, 2006 3:25 pm
Location: Germany

Postby TimSB on Thu Feb 14, 2008 12:01 am

Hi Klaus,

I think you need to PM Jimbob with this info (he hasn't been around the forum for a while - he's just become a parent for the first time, so he hasn't much time for forum visiting :-) ).

As for new stuff, Jimbob tells me there is something he's working on right now.

Not sure what, although he did mention a kernel upgrade...

regards

Tim
Any advice I may give is given in good faith but may be incorrect so listen to what other people have to say as well and don't blame me when it all goes tits up.
TimSB
Forum Admin
 
Posts: 561
Joined: Fri Jun 16, 2006 12:30 pm
Location: north of London, UK

Postby KlausGimm on Thu Feb 14, 2008 10:50 am

Hi Tim

I mailed Jimbob and he asked for some output. I'll copy it in here as well so people can check it against their own installation and compare if necessary.

Code: Select all

[root@cobalt sbin]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [raid4] [raid6]
md3 : active raid1 hde3[0] hdg3[1]
      1003968 blocks [2/2] [UU]

md8 : active raid0 md6[0] md7[1]
      481363264 blocks 64k chunks

md1 : active raid1 hdg1[1] hde1[0]
      2008000 blocks [2/2] [UU]

md2 : active raid1 hdg2[1] hde2[0]
      2008000 blocks [2/2] [UU]

md5 : active raid1 hdg5[1] hde5[0]
      2008000 blocks [2/2] [UU]

md6 : active raid1 hdg6[1] hde6[0]
      237167488 blocks [2/2] [UU]

md7 : active raid1 hdk1[1] hdi1[0]
      244195904 blocks [2/2] [UU]

unused devices: <none>
[root@cobalt sbin]#
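
(For anyone comparing against their own box: md6 and md7 are each a two-disk RAID 1 mirror, and md8 stripes those two mirrors together as RAID 0, which is exactly a nested RAID 10. If mdadm happens to be available on this build, the nesting can be confirmed directly; a minimal sketch:)

Code: Select all

# Members and level of the striped top-level array
mdadm --detail /dev/md8

# And of one of the underlying mirrors
mdadm --detail /dev/md6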




Jim did not explicitly ask for the following information; however, I squeezed it into the mail as well. Copy and paste for the win :o)

Code: Select all
[root@cobalt sbin]# ./fdisk -l

Disk /dev/hde: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hde1               1         250     2008093+  fd  Linux raid autodetect
/dev/hde2             251         500     2008125   fd  Linux raid autodetect
/dev/hde3             501         625     1004062+  82  Linux swap
/dev/hde4             626       30401   239175720    5  Extended
/dev/hde5             626         875     2008093+  fd  Linux raid autodetect
/dev/hde6             876       30401   237167563+  fd  Linux raid autodetect

Disk /dev/hdg: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdg1               1         250     2008093+  fd  Linux raid autodetect
/dev/hdg2             251         500     2008125   fd  Linux raid autodetect
/dev/hdg3             501         625     1004062+  82  Linux swap
/dev/hdg4             626       30401   239175720    5  Extended
/dev/hdg5             626         875     2008093+  fd  Linux raid autodetect
/dev/hdg6             876       30401   237167563+  fd  Linux raid autodetect

Disk /dev/hdi: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdi1               1       30401   244196001   fd  Linux raid autodetect

Disk /dev/hdk: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdk1               1       30401   244196001   fd  Linux raid autodetect

Disk /dev/md7: 250.0 GB, 250056605696 bytes
2 heads, 4 sectors/track, 61048976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md7 doesn't contain a valid partition table

Disk /dev/md6: 242.8 GB, 242859507712 bytes
2 heads, 4 sectors/track, 59291872 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md6 doesn't contain a valid partition table

Disk /dev/md5: 2056 MB, 2056192000 bytes
2 heads, 4 sectors/track, 502000 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md5 doesn't contain a valid partition table

Disk /dev/md2: 2056 MB, 2056192000 bytes
2 heads, 4 sectors/track, 502000 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md1: 2056 MB, 2056192000 bytes
2 heads, 4 sectors/track, 502000 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/sda: 259 MB, 259522560 bytes
65 heads, 32 sectors/track, 243 cylinders
Units = cylinders of 2080 * 512 = 1064960 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         244      253424    6  FAT16
Partition 1 has different physical/logical endings:
     phys=(249, 64, 32) logical=(243, 44, 32)

Disk /dev/md8: 492.9 GB, 492915982336 bytes
2 heads, 4 sectors/track, 120340816 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md8 doesn't contain a valid partition table

Disk /dev/md3: 1028 MB, 1028063232 bytes
2 heads, 4 sectors/track, 250992 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md3 doesn't contain a valid partition table
[root@cobalt sbin]#



I'm going to keep this topic updated as the talk with Jim moves along.


Best regards

Klaus
Symantec Velociraptor 1300 - 2.10.3 ext3
Strongbolt (1.06.03) - rock stable Raid 10
4x Samsung SP2514N
Single CPU 1GHz
(Dual CPU leads to memory errors)
KlausGimm
Legendary Forum member
 
Posts: 263
Joined: Fri Oct 27, 2006 3:25 pm
Location: Germany

Postby KlausGimm on Thu Feb 14, 2008 7:43 pm

Hey folks.

I just got the answer from Jim.



As I suspected, there is nothing wrong with the RAID. This is the active monitor incorrectly reporting the RAID status.

fdisk -l shows that all physical drives are ok.
fdisk -l is also reporting RAID drives as not having valid partition tables. This should be ignored. Fdisk is not there to check virtual partitions.

cat /proc/mdstat is showing that all RAID partitions are complete.





So it's just an Active Monitor problem. I hope Jim will provide a fix for it when he finds the time, as it makes it difficult to recognise a real failure.
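
(Until that fix arrives, /proc/mdstat stays the reliable indicator. A genuinely failed mirror shows a missing member instead of [UU]; a hypothetical degraded entry would look roughly like this:)

Code: Select all

md6 : active raid1 hdg6[1] hde6[0](F)
      237167488 blocks [2/1] [_U]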


Best regards

Klaus
Symantec Velociraptor 1300 - 2.10.3 ext3
Strongbolt (1.06.03) - rock stable Raid 10
4x Samsung SP2514N
Single CPU 1GHz
(Dual CPU leads to memory errors)
KlausGimm
Legendary Forum member
 
Posts: 263
Joined: Fri Oct 27, 2006 3:25 pm
Location: Germany

Postby KlausGimm on Wed Feb 20, 2008 6:38 pm

Aloah !

I got the solution pinged over from Jim. Here are the steps he mailed me:

Code: Select all

> Please log onto the Cobalt and do:
> mv /usr/sausalito/perl/Cobalt/RAID.pm
> /usr/sausalito/perl/Cobalt/RAID.pm.old
>
> wget http://www.osoffice.co.uk/linux/RAID.pm -O
> /usr/sausalito/perl/Cobalt/RAID.pm
>
> /sbin/service cced.init restart
>
> That should sort it out.



Thanks a lot Jim! It works just fine :o)
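
(A quick way to confirm the swap took effect, assuming the wget download succeeded, is to check that the new file is in place and that cced is running again:)

Code: Select all

# The old module should now carry the .old suffix, the new one its normal name
ls -l /usr/sausalito/perl/Cobalt/RAID.pm /usr/sausalito/perl/Cobalt/RAID.pm.old

# cced should be back after the restart
ps aux | grep [c]ced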



Regards

Klaus
Symantec Velociraptor 1300 - 2.10.3 ext3
Strongbolt (1.06.03) - rock stable Raid 10
4x Samsung SP2514N
Single CPU 1GHz
(Dual CPU leads to memory errors)
KlausGimm
Legendary Forum member
 
Posts: 263
Joined: Fri Oct 27, 2006 3:25 pm
Location: Germany

Re: XTR Raid 10 install with 1.6.3 SB CD [Failed raid ??]

Postby jkolter on Mon Jan 18, 2010 4:28 pm

KlausGimm wrote:
Aloah !

I got the solution pinged over from Jim. Here are the steps he mailed me:

Code: Select all

> Please log onto the Cobalt and do:
> mv /usr/sausalito/perl/Cobalt/RAID.pm
> /usr/sausalito/perl/Cobalt/RAID.pm.old
>
> wget http://www.osoffice.co.uk/linux/RAID.pm -O
> /usr/sausalito/perl/Cobalt/RAID.pm
>
> /sbin/service cced.init restart
>
> That should sort it out.

Thanks a lot Jim! It works just fine :o)

Regards

Klaus


Did not work for me. I had to modify this section of the file:

Code: Select all

# these mothers are alive
if ($piece =~ /^(md(\d*))/) {
    $dev = '/dev/' . $1;
    push @$alive_drives, $dev;
    next;
}


I had to add the [a-z] after the md. See the final version below:

Code: Select all

# these mothers are alive
if ($piece =~ /^(md[a-z](\d*))/) {
    $dev = '/dev/' . $1;
    push @$alive_drives, $dev;
    next;
}
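
(If anyone is unsure which variant their own box needs, a throwaway one-liner shows how each pattern behaves against a sample device token; md6 here is just taken from the mdstat output earlier in the thread, and $piece is assumed to hold such a token when RAID.pm parses /proc/mdstat.)

Code: Select all

perl -le '
  my $piece = "md6";   # sample token from /proc/mdstat
  print "original pattern: ", ($piece =~ /^(md(\d*))/      ? "matches" : "no match");
  print "modified pattern: ", ($piece =~ /^(md[a-z](\d*))/ ? "matches" : "no match");
'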

Now the GUI output reads:

Code: Select all

Redundant Array of Independent Disks (RAID) Status Details

Current Status
Status not available    Your system is configured for disk mirroring (RAID 1) using 2 disks.

Status Last Changed
January 18 2010 11:12 AM

Drive Status Details

Current Status
Normal    All drives are functioning normally.

Status Last Changed
January 18 2010 11:12 AM
jkolter
Forum member
 
Posts: 7
Joined: Mon Feb 02, 2009 5:49 am

