Need HPC servers: 256 GB RAM + 2 x 1 TB+ NVMe

I want some servers with large memory and at least two 1 TB NVMe disks, near the US East Coast or in the Netherlands.


Similar Content

Looking for replacement of WebNX Storage server and servers with iGPU

WebNX has announced their servers won't be back for weeks. We don't have weeks.

Looking for these on the US West Coast.

- 500 TB storage server(s): NVMe drives for cache, 1 Gbps (10 Gbps for a month if possible) unmetered incoming, 40 Gbps private networking (aggregate if spread across multiple storage servers)
- 15 x servers with iGPU: 32 GB RAM, 1 TB SSD/NVMe, 1 Gbps private network, 1 Gbps public with at least 40 TB outgoing.

We require free private networking for all servers.
Looking for something close to WebNX's pricing.

<<snipped>>

Thanks.

High-end hardware in North America

I'm wondering what kind of price range I'll be looking at for a high-end server in North America (Canada is OK), preferably around the East Coast.

Let's say the hardware I need is:
- many-core EPYC CPU
- 128-256 GB RAM
- 4 x ~4 TB NVMe
- 2 x 10 G connection (for burst, not sustained traffic), <100 TB traffic
- redundant PSUs
- many 9s of uptime
- single tenant, 1 IPv4 + 1 IPv6

The servers would be worker nodes for a SaaS, so I don't need the lowest latency to major markets. DCs in an empty desert, tundra, or in the middle of cornfields are OK.

Is $500-600 per month possible for this setup?

I'm actually OK with a cost structure similar to colocating (large $$$$ upfront, small $ recurring) if I trust the provider's reputation enough. But the DIY nature of colo when it comes to hardware replacement and networking is not for me. I actually tried colocating with low-end hardware a short while back to test the waters, and ended up leaving said hardware with the colo provider (after paying my bills, of course).
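For context, here's the kind of back-of-the-envelope math I'm doing on the upfront-vs-recurring trade-off; a minimal Python sketch where all the dollar figures are hypothetical placeholders, not quotes:

    # Break-even between colo-style pricing (large upfront, small recurring)
    # and a flat monthly dedicated rental. All numbers are hypothetical.

    def breakeven_month(upfront: float, colo_monthly: float, rental_monthly: float) -> float:
        """Month at which cumulative colo cost drops below cumulative rental cost."""
        saved_per_month = rental_monthly - colo_monthly
        if saved_per_month <= 0:
            raise ValueError("rental is cheaper or equal every month; no break-even")
        return upfront / saved_per_month

    # e.g. $6,000 hardware upfront + $150/mo colo fees vs. a $550/mo rental
    print(f"break-even after ~{breakeven_month(6000, 150, 550):.1f} months")  # ~15 months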

Just a chit-chat thread; I'm not going to commit to anything for the next couple of months.

Looking for some suggestions regarding CPU selection

I am planning to upgrade my current, very old server (dual E5-2430 Xeons, 12 cores/24 threads, SATA SSDs), which has been doing duty as a shared hosting server for around 400 websites. I have noticed slow website loading times. 95% of the sites use PHP/MySQL (WordPress).

Between the time I got the server and now, so many new CPUs have come out, and it looks like the Xeon E3/E5 lines have been discontinued. Dual Scalable Silver/Gold Xeon servers are out of my budget, so the only alternative is an E-2288G or a W-1290P (more interested in this one) Xeon with NVMe drives. Passmark scores for these servers are much higher than my current one's, but moving to these new servers will mean a downgrade in the number of cores.

Looking for some personal experiences with these E-2288G/W-1290P CPUs, so that I am not actually getting a downgrade by changing servers.

I don't need more than 64 GB of RAM, and 2 x 2 TB NVMe SSDs are fine.
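To make the core-count question concrete, this is roughly how I'm thinking about it; the per-core speed ratios below are placeholders I made up, not Passmark data:

    # For a request-parallel workload (shared PHP hosting), aggregate
    # throughput ~ cores x per-core speed, while per-page latency tracks
    # single-thread speed. Per-core factors below are assumptions.

    candidates = {
        "2x E5-2430 (current)": {"cores": 12, "per_core": 1.0},  # baseline
        "E-2288G":              {"cores": 8,  "per_core": 2.2},  # assumed ratio
        "W-1290P":              {"cores": 10, "per_core": 2.3},  # assumed ratio
    }

    for name, spec in candidates.items():
        aggregate = spec["cores"] * spec["per_core"]
        print(f"{name}: ~{aggregate:.1f}x aggregate, {spec['per_core']:.1f}x single-thread")

If ratios anywhere near these hold, fewer-but-faster cores would still be a net win for both throughput and per-page latency.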

Looking to get the same deal we have now at another DC (30 Gbps and many NVMe SSDs)

We currently have:

3 servers in EU

Specs for each:
CPU: 2 x Xeon Silver 4114
RAM: 120 GB
Network: 10 Gbps unmetered dedicated (per server, 30 Gbps total)
Storage: 22 x 2 TB PCIe NVMe SSD, RAID 0

Current cost: $4,500 per month for all 3.

We are looking for the exact same setup at another DC in Western Europe; DE, NL, or UK would be ideal.
We can compromise on CPU/RAM if it helps reach this price, as they are less relevant for our usage.

Thanks.

Cheap storage servers

I am looking to rent 4 x large storage servers with around 120 TB of HDDs in each.
We would also need two fairly powerful dedicated servers: one at 10 Gbps with 32 GB memory, a 256 GB SSD, and around 200 TB of external bandwidth; and another at gigabit with around 50 TB of bandwidth, 32 GB memory, a 256 GB SSD, and a 2 TB HDD. We would need an internal network connecting all these systems together; ideally this would run at 10 Gbps or above. We would likely look to upgrade our external bandwidth over time.

Not bothered about location; looking for a long-term relationship with a provider, as we will likely buy more of these servers as we expand.
Please bear in mind when quoting that we are running on quite a tight budget.

Looking for massive storage servers - 800 TB

Looking for dedis with:

1.
Gold 6130 (or similar)
64 GB DDR4
JBOD with 66 x 12 TB SAS
2 x LSI MegaRAID 9480-8i8e with BBU
HW RAID 10; the usable disk space will be 396 TB (quick check below)
4 Gbps dedicated unmetered, burstable to 20 Gbps
Netherlands (other places in Europe could also be fine if a latency test shows good results)
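The 396 TB figure is just RAID 10 halving the raw capacity; a quick check in Python:

    # RAID 10 mirrors pairs, so usable space is half the raw total.
    def raid10_usable_tb(drives: int, size_tb: float) -> float:
        assert drives % 2 == 0, "RAID 10 needs an even drive count"
        return drives * size_tb / 2

    print(raid10_usable_tb(66, 12))  # 396.0 TB, matching the spec above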

AND

2.

2 x Xeon Gold 6133 (or similar)
64 GB RAM
20 x 2 TB NVMe SSD
HW RAID 0
10 Gbps unmetered

and/or

2 x Xeon Gold 6133 (or similar)
64 GB RAM
14 x 4 TB NVMe SSD
HW RAID 0
10 Gbps unmetered

in NL

Modern servers: are NVMe drives hot-swappable?

When I see a dedi advertised with, say, 4-6 NVMe drives, are the drives likely to be individually hot-swappable[1], or are they buried inside the chassis and attached to the mobo like in a desktop PC?

Ask the provider, I know... but I'd like to save time and know my chances beforehand, as asking too much feels like trolling them.

Am I better off going the colo route for this requirement? I'll do it if I have to, but I prefer the dedi route if possible.

[1] https://img.youtube.com/vi/5sauNryOx...resdefault.jpg
(don't need that many drives)

Best method to connect 24 SSDs for software RAID?

Hello,

I have been using 3108 RAID controllers with Supermicro servers for a long time.
They work fine.

However, given the increase in SSD performance, I have long suspected that software RAID will be better in performance, and some filesystems such as ZFS bring additional unique advantages.
So I am finally considering making the leap from hardware RAID to software RAID, especially since, from what I know, NVMe drives don't even have hardware RAID support.

In the past, when connecting many drives, I noticed I had to set up each single drive as a single-drive RAID 0 through the RAID card, which I think would decrease performance.

So I have two questions regarding the best method to connect 24 SSDs for software RAID:

1.
What is the standard way of connecting 24 SATA SSDs to a storage server so that each drive can be used individually and/or set up as software RAID?
From what I know, a RAID card such as the 3108 is usually needed to connect so many drives to the mainboard.
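To be clear about what I'm after, something like the following; a sketch assuming the drives are exposed individually (HBA in IT mode, or the RAID card's JBOD/passthrough mode) as /dev/sdb../dev/sdy, with the RAID level only as an example. It prints the mdadm command rather than running it:

    # Build (but don't run) an mdadm create command for 24 passthrough SSDs.
    import string

    devices = [f"/dev/sd{c}" for c in string.ascii_lowercase[1:25]]  # sdb..sdy = 24 drives
    cmd = ["mdadm", "--create", "/dev/md0", "--level=10",
           f"--raid-devices={len(devices)}"] + devices
    print(" ".join(cmd))  # review carefully, then run as root (destructive!)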

2.
How do 24 NVMe drives work in storage servers that support them? Does the mainboard have shared PCIe lanes connected to the NVMe drives?
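My own back-of-the-envelope lane math, using commonly cited platform lane counts (the exact board may use PCIe switches or bifurcation rather than direct attach, so treat these as assumptions):

    # 24 U.2 NVMe drives at x4 each vs. total platform PCIe lanes.
    drives, lanes_per_drive = 24, 4
    needed = drives * lanes_per_drive  # 96 lanes

    platforms = {"single EPYC": 128, "dual Xeon Scalable": 96}
    for name, avail in platforms.items():
        verdict = "fits" if needed <= avail else "needs a PCIe switch"
        print(f"{name}: need {needed}, have {avail} -> {verdict}")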

Thank you.

Looking for USA Dedicated Server

Hey all,

I'm looking for suggestions for a new dedicated server based on the East Coast, USA; all my servers are European-based, so I don't know any good US-based server companies. I'm in need of a server with:

At least a 6c/12t CPU (Intel Xeon etc.)
Between 64 GB and 128 GB RAM
1 Gbps unmetered bandwidth
4 TB storage (either 1 x 4 TB or 2 x 2 TB merged)
Ubuntu 18.04 OS (custom software for clients demands 18.04)

So suggestions are welcome, but please no OVH or Hetzner, as I don't use them; I've had bad experiences with both.

Cheers

Ceph Build - EPYC NVMe Power Consumption

Hi All -

Wanted to see people's experience with Ceph builds (3/5 nodes) using second-gen AMD EPYC with NVMe drives. One of my questions is actual power consumption; I'm trying to understand what our electricity costs will be like for the year. This is what I am looking at for our potential config with 3 nodes (rough power math follows the parts list):

1x Supermicro A+ Server 1114S-WN10RT
1x AMD EPYC 7302P, 3.00 GHz, 16C/32T, Socket SP3, tray
4x 32 GB SK hynix DDR4-3200 CL22 (2Gx4) ECC reg. DR
1x MON: 256 GB Samsung SSD PM981a NVMe, M.2 (PCIe)
10x OSD: 7.68 TB Samsung SSD PM1733, 2.5", U.2 PCIe 4.0 x4, NVMe
1x Mellanox MCX414A-BCAT ConnectX-4 EN NIC - 40/56GbE dual-port QSFP+, PCIe 3.0 x8
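Here is the rough per-node power math I've done so far; all the component draws are datasheet-order-of-magnitude assumptions, not measurements, and the electricity rate is a placeholder:

    # Rough per-node power and yearly electricity cost for the config above.
    base_w = {
        "EPYC 7302P (155 W TDP at ~70% avg load)": 155 * 0.70,
        "10x PM1733 U.2 (~10 W each, typical)":    10 * 10,
        "4x 32 GB RDIMM (~4 W each)":              4 * 4,
        "PM981a M.2 boot drive":                   5,
        "ConnectX-4 dual-port NIC":                15,
    }
    node_w = sum(base_w.values()) * 1.20  # +20% for fans, PSU loss, misc

    kwh_per_year = node_w * 24 * 365 / 1000
    rate = 0.15  # $/kWh, assumed
    print(f"~{node_w:.0f} W per node, ~{kwh_per_year:.0f} kWh/yr,"
          f" ~${kwh_per_year * rate:.0f}/yr per node at ${rate}/kWh (x3 nodes)")

Would love to hear real-world numbers against this.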