Seedbank
Revision as of 22:21, 23 October 2023
Or, more specifically, the Storinator Q30 enhanced
Hardware
- 30 × 16.38 TB hard drives (1 used for the OS, so 29 for storage)
Web Administration (IPMI)
There are two subsystems on the NAS, the IPMI system and the main operating system. The IPMI system can be used to configure the system before an OS is present and manage other administration tasks.
- Log into the web console through its IP (currently 192.168.1.28; check the DHCP server leases if that has changed)
- The default creds are
- Username: ADMIN
- Password: (on side of server)
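Most of the IPMI tasks below can also be done from the CLI with ipmitool. A sketch, assuming ipmitool is installed on your own machine and using the IP and credentials above (`<password>` is a placeholder for the one on the side of the server):

```shell
# Query the BMC over the network (lanplus = IPMI v2.0)
ipmitool -I lanplus -H 192.168.1.28 -U ADMIN -P '<password>' chassis status

# Power the server on remotely (same effect as Remote Control -> Power Control)
ipmitool -I lanplus -H 192.168.1.28 -U ADMIN -P '<password>' chassis power on
```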
Install Debian
See: https://knowledgebase.45drives.com/kb/kb450289-ubuntu-20-04-redundant-os-installation/
- Get yourself a copy of Debian (specifically, a full installation image)
- Open the IPMI control panel (see above)
- Launch a virtual console either with the HTML5 or Java plugin
- To use Java, you'll need OpenJDK, and since Java Web Start (.jnlp) has been removed from modern JDKs, you'll also need OpenWebStart
- Then open the launch.jnlp file with OpenWebStart (not sure how to do this via CLI; right-click and "Open with...")
- It seems like HTML5 can do everything the Java version does without needing all that java shit, so might as well use that?
- Power for the "server" is separate from the IPMI subsystem, so you might need to turn the server on from the Remote Control -> Power Control menu
- Wait this thing comes with Ubuntu installed... nvm for now
Config
See the ansible configuration for the seedbank host.
Security
- Users
  - Root password changed
  - User password changed
  - Made user jonny, which is in sudoers
- SSH
  - Root access disabled
  - Password access disabled
- Firewall
  - Disable all incoming connections, except LAN to port 22.
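A sketch of the checklist above, assuming ufw for the firewall and a 192.168.1.0/24 LAN (neither is stated on this page, so adjust to the actual setup):

```shell
# Firewall: drop everything inbound except SSH from the LAN
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp
sudo ufw enable

# SSH: the /etc/ssh/sshd_config lines matching the items above
#   PermitRootLogin no
#   PasswordAuthentication no
# then reload: sudo systemctl reload ssh
```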
ZFS
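This section is empty as of this revision. As a placeholder, a hedged sketch of what pool creation might look like with the 3 × 10 RAIDZ2 layout the 45drives architect recommends in the quote below; the pool name, zvol name, and device names are all made up:

```shell
# One RAIDZ2 vdev per line, 10 drives each (device names are placeholders;
# prefer /dev/disk/by-id paths in practice)
sudo zpool create tank \
  raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
  raidz2 sdl sdm sdn sdo sdp sdq sdr sds sdt sdu \
  raidz2 sdv sdw sdx sdy sdz sdaa sdab sdac sdad sdae

# A sparse zvol to export over iSCSI (size matches the disk seen below)
sudo zfs create -s -V 275T tank/dandi
```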
iSCSI
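Also empty as of this revision. On the client side, connecting to the target probably looks something like this with open-iscsi (per the Debian wiki link under Documentation; the portal IP is a placeholder):

```shell
sudo apt install open-iscsi

# Ask the target what it exports, then log in to everything discovered
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.29
sudo iscsiadm -m node --login

# The new disk then appears as e.g. /dev/sdd, which the next section formats
lsblk
```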
Formatting the Partition
THEN once you have gone through ALL THAT SHIT you need to format the thing you just made
First make a partition table with parted
>>> sudo parted /dev/sdd
GNU Parted 3.5
Using /dev/sdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
>>> (parted) mklabel gpt
>>> (parted) print
Model: IET VIRTUAL-DISK (scsi)
Disk /dev/sdd: 275TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

>>> (parted) mkpart primary ext4 0% 100%
>>> (parted) print
Model: IET VIRTUAL-DISK (scsi)
Disk /dev/sdd: 275TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      1049kB  275TB  275TB  ext4         primary

(parted) quit
and then FORMAT THAT BAD BOY
sudo mkfs.ext4 /dev/sdd1
Get the ID by doing ls -l /dev/disk/by-id and find the one POINTING TO YOUR PARTITION
>>> ls -l /dev/disk/by-id
lrwxrwxrwx 1 root root 9 Oct 23 22:06 scsi-360000000000000000e00000000010001 -> ../../sdd
lrwxrwxrwx 1 root root 10 Oct 23 22:06 scsi-360000000000000000e00000000010001-part1 -> ../../sdd1
and ADD IT TO /etc/fstab.
/dev/disk/by-id/scsi-360000000000000000e00000000010001-part1 /mnt/seedbank/p2p/dandi ext4 _netdev 0 0
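To check the entry without rebooting, something like this (mount point path from the fstab line above):

```shell
# Create the mount point, then mount everything listed in fstab;
# a bad entry will error out here instead of hanging the next boot
sudo mkdir -p /mnt/seedbank/p2p/dandi
sudo mount -a
findmnt /mnt/seedbank/p2p/dandi
```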
Documentation
- Initial Cable Setup - note that the IPMI cable is separate from the main internet cable.
- Setting up Remote Access
- Mounting Virtual Media
- Installing Ubuntu
- RAID and RAIDZ - info on ZFS and RAIDZ
- https://wiki.archlinux.org/title/ZFS/Virtual_disks - Archwiki on ZFS
- https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
- https://github.com/mrlesmithjr/ansible-zfs/blob/master/tasks/manage_zfs.yml Example of using ZFS ansible commands
- https://linuxhint.com/share-zfs-volumes-via-iscsi/ - actual guide on sharing ZFS volumes over iSCSI
- https://wiki.debian.org/SAN/iSCSI/open-iscsi
Reference
Quotations
Daniel says that the 45drives ppl said this when ordering:
I spoke with our Architect, and the Storinator Q30 configured with 2 vdevs of 15 HDDs in RAIDZ2 does have the capability to saturate a 10Gb network. I would recommend adding more resiliency by going with 3 vdevs of 10 HDDs in RAIDZ2. It will still be able to saturate a 10Gb network but will add more fault tolerance and faster resilvering times.
we shall figure out what that means...
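Rough arithmetic on what it means, assuming the 16.38 TB drives listed under Hardware (RAIDZ2 spends two drives per vdev on parity; this ignores ZFS metadata overhead):

```shell
# data drives per layout = vdevs * (drives per vdev - 2 parity drives)
awk 'BEGIN {
  tb = 16.38                       # per-drive size from the Hardware section
  printf "2x15 RAIDZ2: %d data drives, ~%.2f TB usable\n", 2*13, 2*13*tb
  printf "3x10 RAIDZ2: %d data drives, ~%.2f TB usable\n", 3*8,  3*8*tb
}'
# 2x15 RAIDZ2: 26 data drives, ~425.88 TB usable
# 3x10 RAIDZ2: 24 data drives, ~393.12 TB usable
```

So the recommended 3 × 10 layout trades roughly 33 TB of usable space for failures being spread across more vdevs and faster resilvers, as the quote says.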