Are NVMe drives in modern servers hot-swappable?

When I see a dedi advertised with, say, 4-6 NVMe drives, are the drives likely to be individually hot-swappable [1], or are they buried inside the chassis and attached to the mobo like a desktop PC's?

Ask the provider, I know... but I'd like to save time and know my chances beforehand, since asking too many questions feels like trolling them.

Am I better off going the colo route for this requirement? I'll do it if I have to, but I'd prefer the dedi route if possible.

[1] https://img.youtube.com/vi/5sauNryOx...resdefault.jpg (don't need that many drives)


Similar Content



Best method to connect 24 SSDs for software RAID?

Hello,

I have been using 3108 RAID controllers with Supermicro servers for a long time.
It works fine.

However, with the increase in SSD performance, I have long suspected that software RAID will probably perform better, and filesystems such as ZFS bring additional unique advantages.
So I am finally considering making the leap from hardware RAID to software RAID, especially since, from what I know, NVMe drives don't even have hardware RAID support.

In the past, when connecting many drives, I noticed I had to set up each single drive as its own RAID 0 through the RAID card, which I think decreases performance.

So I have two questions about the best method to connect 24 SSDs for software RAID:

1.
What is the standard way of connecting 24 SATA SSDs to a storage server so that each drive can be used individually and/or set up as software RAID?
From what I know, a RAID card such as the 3108 is usually needed to connect that many drives to the mainboard.

2.
How do 24 NVMe drives work in storage servers that support them? Does the mainboard have shared PCIe lanes connected to the NVMe drives?
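For question 2, here is my rough understanding of the lane math, sketched out; the per-platform lane counts are typical published figures, and the oversubscription point is the usual reason backplanes use PCIe switches, so double-check against any specific board:

```python
# Back-of-envelope PCIe lane math for a 24-bay NVMe chassis.
# Lane counts below are typical published figures; a real board spends
# some of them on NICs, M.2 slots, etc., so verify the exact model.

LANES_PER_NVME = 4                # U.2 / M.2 NVMe drives link at x4
DRIVES = 24
needed = DRIVES * LANES_PER_NVME  # 96 lanes just for the drives

platforms = {
    "single Xeon Scalable (~48 CPU lanes)": 48,
    "dual Xeon Scalable (~96 CPU lanes)": 96,
    "single AMD EPYC (128 CPU lanes)": 128,
}

print(f"{DRIVES} drives x {LANES_PER_NVME} lanes = {needed} lanes needed")
for name, lanes in platforms.items():
    if lanes >= needed:
        print(f"{name}: can attach all drives directly")
    else:
        # With fewer lanes, the backplane puts the drives behind PCIe
        # switches, oversubscribing the uplink to the CPU.
        print(f"{name}: needs PCIe switches "
              f"(~{needed / lanes:.1f}:1 oversubscription)")
```

So 24 drives want 96 lanes on their own, which seems to be why true 24-bay NVMe servers are usually dual-socket or EPYC platforms, often still with switches on the backplane.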

Thank you.

Looking to get the same deal we have now at another DC (30 Gbps and many NVMe SSD drives)

We currently have:

3 servers in EU

specs for each:
CPU: 2x Xeon Silver 4114
RAM: 120 GB
Network: 10 Gbps unmetered dedicated (per server, 30 Gbps total)
Storage: 22x 2TB PCIe NVMe SSD, RAID 0

Current cost: $4,500 per month for all 3.

We are looking for the exact same setup at another DC in Western Europe; DE, NL, or UK would be ideal.
We can compromise on CPU/RAM if it helps reach this price, as they're less relevant for our usage.

Thanks.

Home Server Question

So I've run into a little issue with a build I'm working on, and I have a few questions. I'm going to be working with a Proxmox cluster running NFS storage for Nextcloud instances. Because it's a personal home server, I'll be using the TUF Gaming B550-Plus mobo with the AMD Ryzen 5 3600 processor.

I'm going to install Proxmox on an NVMe SSD and also add 6x 4TB SAS drives for the storage.

Now my question: the mobo has onboard RAID, but after reading a few guides I see a lot of them saying not to use RAID arrays and to use the NFS settings in Proxmox to combine the storage instead. Are there any other options available for storage with Proxmox?

Would this be possible without a RAID card? If not, which RAID card do I need? Would onboard RAID support 4TB SAS drives? I've been scouring Google for the last couple of days but haven't come up with anything significant that would help me finish off the build.
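In the meantime, here's the capacity/redundancy math I sketched for the 6x 4TB drives under the software layouts Proxmox can manage natively (ZFS mirrors/RAIDZ, or mdadm equivalents); treat usable space as approximate, since filesystem metadata and reserved space eat a bit more:

```python
# Approximate usable capacity for six 4 TB drives under common
# software-RAID / ZFS layouts. Real usable space will be somewhat
# lower after filesystem metadata and reserved space.

DRIVES, SIZE_TB = 6, 4

layouts = {
    "striped mirrors (3x 2-way, ~RAID10)": ((DRIVES // 2) * SIZE_TB, "1 drive per mirror pair"),
    "RAIDZ1 / RAID5 (1 parity drive)":     ((DRIVES - 1) * SIZE_TB, "any 1 drive"),
    "RAIDZ2 / RAID6 (2 parity drives)":    ((DRIVES - 2) * SIZE_TB, "any 2 drives"),
}

for name, (usable_tb, survives) in layouts.items():
    print(f"{name}: ~{usable_tb} TB usable, survives losing {survives}")
```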

Any advice would be much appreciated.

Thanks.

Looking for replacement of WebNX Storage server and servers with iGPU

WebNX has announced their servers won't be back for weeks. We don't have weeks.

Looking for these on the West Coast of the USA.

- 500TB storage server(s): NVMe drives for cache, 1 Gbps unmetered incoming (10 Gbps for a month if possible), 40 Gbps private networking (aggregate if spread over multiple storage servers)
- 15x servers with iGPU: 32GB RAM, 1TB SSD/NVMe, 1 Gbps private network, 1 Gbps public with at least 40TB outgoing

We require free private networking for all servers.
Looking for something close to WebNX's pricing.

<<snipped>>

Thanks.

Looking for some suggestions regarding CPU selection

I am planning to upgrade my current, very old server (dual E5-2430 Xeons, 12 cores/24 threads, with SATA SSDs), which has been doing duty as a shared hosting server for around 400 websites. I have noticed slow website loading times. 95% of the sites use PHP/MySQL (WordPress).

Between when I got the server and now, so many new CPUs have come out, and it looks like the Xeon E3/E5 lines have been discontinued. Dual Scalable Silver/Gold Xeon servers are out of my budget, so the only alternative is an E-2288G or W-1290P Xeon (more interested in the latter) with NVMe drives. Passmark scores for these are much higher than my current server's, but moving to them means a downgrade in core count.
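This is the back-of-envelope I'm using to reason about the core-count downgrade; the score fields are placeholders to be filled in from cpubenchmark.net, not real measurements:

```python
# Rough model for "fewer but faster cores" on a shared hosting box.
# Fill in real Passmark numbers from cpubenchmark.net -- the zeros
# are placeholders, not measurements.

OLD = {"name": "2x E5-2430", "multi": 0, "single": 0}
NEW = {"name": "W-1290P",    "multi": 0, "single": 0}

def compare(old: dict, new: dict) -> str:
    """With ~400 small PHP/MySQL sites, aggregate capacity roughly tracks
    the multi-thread score, while per-page latency roughly tracks the
    single-thread score."""
    if not (old["multi"] and old["single"] and new["multi"] and new["single"]):
        return "fill in the Passmark scores first"
    return (f"{new['name']} vs {old['name']}: "
            f"~{new['multi'] / old['multi']:.2f}x throughput, "
            f"~{new['single'] / old['single']:.2f}x per-request speed")

print(compare(OLD, NEW))
```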

Looking for some personal experiences with these E-2288G/W-1290P CPUs, so that I don't end up with an actual downgrade by changing servers.

I don't need more than 64GB of RAM and a 2x2TB NVMe SSD is fine.

Ceph Build - EPYC NVMe Power Consumption

Hi All -

Wanted to hear people's experience with Ceph builds (3/5 nodes) using 2nd-gen AMD EPYC with NVMe drives. One of my questions is actual power consumption; I'm trying to understand what our electricity costs will be like for the year (rough estimate sketched below, after the config). This is what I am looking at for our potential config with 3 nodes:

1x Supermicro A+ Server 1114S-WN10RT
1x AMD EPYC 7302P, 3.00GHz, 16C/32T, Socket SP3, tray
4x 32GB SK hynix DDR4-3200 CL22 (2Gx4) ECC reg. DR
1x MON: 256GB Samsung SSD PM981a NVMe, M.2 (PCIe)
10x OSD: 7.68TB Samsung SSD PM1733, 2.5-inch, U.2 PCIe 4.0 x4, NVMe
1x Mellanox MCX414A-BCAT ConnectX-4 EN NIC
- 40/56GbE dual-port QSFP+ PCIe3.0 x8
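And here is the power back-of-envelope behind my question; the 155 W TDP for the 7302P is the published spec, but every other per-component wattage (and the PSU efficiency and kWh rate) is an assumption of mine, so I'd still measure at the PDU:

```python
# Rough per-node power estimate for the EPYC/NVMe config above.
# Component wattages are assumed typical figures, not measurements.

components_w = {
    "EPYC 7302P (155 W TDP, near-full load)": 155,
    "4x 32GB DDR4 RDIMM (~4 W each)":          16,
    "10x PM1733 U.2 NVMe (~10 W each loaded)": 100,
    "M.2 boot SSD":                              5,
    "ConnectX-4 NIC":                           20,
    "fans, BMC, board overhead":                60,
}

PSU_EFFICIENCY = 0.92   # assumed Platinum-class PSU
NODES = 3
EUR_PER_KWH = 0.30      # substitute your actual DC rate

dc_load_w = sum(components_w.values())
wall_w = dc_load_w / PSU_EFFICIENCY
kwh_per_year = wall_w * 24 * 365 / 1000

print(f"per node: ~{dc_load_w} W DC load, ~{wall_w:.0f} W at the wall")
print(f"{NODES} nodes: ~{NODES * kwh_per_year:,.0f} kWh/year, "
      f"~{NODES * kwh_per_year * EUR_PER_KWH:,.0f} EUR/year "
      f"at {EUR_PER_KWH} EUR/kWh")
```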

How much I/O should I get from these options?

So I'm looking to buy something like the Infra-3 server from OVH (https://www.ovhcloud.com/es/bare-metal/infra/infra-3/). They offer a 4x 960GB SATA SSD hardware RAID, and also a 3x 3.84TB NVMe SSD software RAID. And although I want more space, I think they sell the 4x 960 so people can gain more speed? How much more speed could I gain from that configuration?

They also sell a 3x 6TB SATA HDD hardware RAID. I want to know how much I/O speed I should get in each case.
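Here's the rough comparison I've made so far; the per-drive speeds are typical figures I've seen (SATA SSD ~550 MB/s, datacenter NVMe ~3000 MB/s, 7.2k HDD ~200 MB/s), not OVH's numbers, and linear scaling of striped sequential reads with drive count is a best case:

```python
# Rough best-case sequential read throughput for the three disk options.
# Per-drive speeds are assumed typical figures, not OVH specs; RAID
# level, controller, and random-IO patterns will change the picture.

options = {
    "4x 960GB SATA SSD (hard RAID)":  (4, 550),   # MB/s per drive
    "3x 3.84TB NVMe SSD (soft RAID)": (3, 3000),
    "3x 6TB SATA HDD (hard RAID)":    (3, 200),
}

for name, (drives, mb_s) in options.items():
    print(f"{name}: ~{drives * mb_s} MB/s best-case striped sequential read")
```

By this crude yardstick, the NVMe option wins easily on sequential speed even with one fewer drive.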

Currently I have an Infra-2 (https://www.ovhcloud.com/es/bare-metal/infra/infra-2/) with 2x 960GB NVMe SSD soft RAID + 2x 6TB SATA HDD soft RAID. How can I test the I/O speed of those drives?
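The usual tools for this are fio or dd; as a starting point, this is the naive timing sketch I'd run first (it writes a throwaway file, so only point it at a scratch path, and the read figure is inflated by the page cache unless the file is much bigger than RAM):

```python
# Naive sequential throughput probe: time writing then re-reading a
# large file. Use fio for serious numbers; this is only a ballpark,
# and the read pass is cache-inflated unless the file >> RAM.
import os
import time

PATH = "/mnt/test/io_probe.bin"   # scratch path on the disk under test
BLOCK = 1024 * 1024               # 1 MiB blocks
BLOCKS = 4096                     # 4 GiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(BLOCKS):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())          # ensure data actually reached the disk
write_mb_s = BLOCK * BLOCKS / (time.time() - start) / 1e6

start = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
read_mb_s = BLOCK * BLOCKS / (time.time() - start) / 1e6

os.remove(PATH)
print(f"sequential write ~{write_mb_s:.0f} MB/s, read ~{read_mb_s:.0f} MB/s")
```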

And if I start a new server, what kind of partitioning do you recommend to improve I/O?

Need HPC servers: 256GB RAM + 2x 1TB+ NVMe

I want some servers with large memory and at least two 1TB NVMe disks, near the East Coast or in the Netherlands.

Can this dedicated server handle BigBlueButton?

Hello, I am looking for a good server to run BigBlueButton on.
There will be a max of 400 users active at the same time.

I was thinking of these dedicated servers:

Dedi-1:

- CPU: 2x Intel Xeon E5-2690 (or E5-2690v2)
- RAM: 64GB 10600R ECC Registered (4x 16GB)
- RAID controller: Dell PERC H310
- 1 Gbit/s up and down
- 2x 1TB SSD


Dedi-2:

- CPU: 2x Intel Xeon E5-2690v4
- RAM: 64GB DDR4
- RAID controller: Dell PERC H310
- 1 Gbit/s up and down
- 2x 1TB SSD

Can the Dedi-1 server with the E5-2690 handle this?
If you have another suggestion, please let me know, but the CPU needs to be from the Xeon E5 family.
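For reference, this is the crude capacity model I'm working from; both per-user figures are guesses I still need to calibrate with a real load test, not BigBlueButton benchmarks:

```python
# Very rough capacity check for 400 concurrent BigBlueButton users.
# Both per-user figures are GUESSES to replace with load-test data.

USERS = 400
CPU_PCT_PER_USER = 1.0   # hypothetical: % of one thread per active user
MB_RAM_PER_USER = 100    # hypothetical

servers = {
    "Dedi-1 (2x E5-2690, 32 threads, 64GB)":   (32, 64),
    "Dedi-2 (2x E5-2690v4, 56 threads, 64GB)": (56, 64),
}

for name, (threads, ram_gb) in servers.items():
    cpu_need = USERS * CPU_PCT_PER_USER / 100   # threads' worth of CPU
    ram_need = USERS * MB_RAM_PER_USER / 1024   # GB
    verdict = "plausible" if cpu_need <= threads and ram_need <= ram_gb else "too tight"
    print(f"{name}: needs ~{cpu_need:.0f} threads, ~{ram_need:.0f} GB RAM -> {verdict}")
```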

High-end hardware in North America

I'm wondering what kind of price range I'll be looking at for a high-end server in North America (Canada is OK), preferably around the East Coast.

Let's say the hardware I need is:
- many-core EPYC CPU
- 128-256GB RAM
- 4x ~4TB NVMe
- 2x 10G connection (for burst, not sustained traffic), <100TB traffic
- redundant PSUs
- many nines of uptime
- single tenant, 1 IPv4 + 1 IPv6

The servers would be worker nodes for a SaaS, so I don't need the lowest latency to major markets. DCs in an empty desert, tundra, or the middle of corn fields are OK.

Is $500-600 per month possible for this setup?

I'm actually OK with a cost structure similar to colocating (large $$$$ upfront, small $ recurring) if I trust the provider's reputation enough. But the DIY nature of colo when it comes to hardware replacement and networking is not for me. I actually tried colocating with low-end hardware a short while back to test the waters, and ended up leaving said hardware with the colo provider (after paying my bills, of course).

Just a chit-chat thread; I'm not going to commit to anything for the next couple of months.