Need help with Java crashing with SIGSEGV error

I am pretty inexperienced with Linux and running a server, but I wanted to use an old PC to host a Minecraft server for a friend and me. I got MineOS Turnkey (Jessie) installed running a 1.14.4 PaperMC server, and for the most part it works in my limited testing so far.

The only problem is that Java seems to crash randomly, almost literally every other day. One day it will go without any crashes and I’ll think the problem is fixed; the next day it will crash constantly. Sometimes just starting the server causes a crash, or it crashes when I first log in. The day before, it ran without any crashes at all, then a little after midnight it crashed, and the next day it crashed randomly. The Java error I always get is SIGSEGV. Here is a link to the latest hs_err_pid: https://pastebin.com/a5nTKXGa

I’ve tried using Aikar’s G1GC flags and going without them, with the same results. I’ve also tried setting XMX and XMS to the same value, and starting XMS at 256 to see if it made a difference; it didn’t. I’m currently on Paper build 192, if that makes a difference, but I’ve also used previous builds with the same result.

I only have 4 GB of memory in the PC (it’s an old Core 2 Duo), but I’ve left plenty of memory for the OS. I’ve also tried Linux Lite and it crashed on that as well. I’ve run a memtest and it passed without any errors.

I don’t use any plugins, and it’s crashed with both a new world and the current world we’re playing on. When I was using Linux Lite it was on OpenJDK 11, if I remember correctly; now I’m just using the OpenJDK 8 that came installed. I don’t really know what else to try. Any help or a point in the right direction would be greatly appreciated. Thanks!

A SIGSEGV is an error (signal) caused by an invalid memory reference, i.e., a segmentation fault. You are probably trying to access memory out of bounds or trying to use too much memory.

With it being a Java error (and one about memory), you’re likely better off allocating more to Paper than less.

That said, how much are you planning on allocating to your server in the first place? Since it’s likely going to be in the GBs rather than around 256 MB, you’re hindering your server by starting XMS low. Whenever Paper needs more memory, it has to do a garbage collection and then reallocate contiguous memory.

If you want Minecraft to have 3 GB, don’t give it 256 MB and force GCs to grow it; just give it 3 GB from the start.
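For example, a minimal launch line with matched heap sizes looks like this (the jar name and the 3 GB figure are just illustrative; substitute your own):

```shell
# Matched -Xms/-Xmx: the heap starts at its final size, so the JVM
# never has to grow it (and trigger extra GCs) while the server is loading.
java -Xms3G -Xmx3G -jar paperclip.jar
```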

I originally had 2256 MB for both XMX and XMS. I only changed XMS to 256 to see if it would somehow help the crashing. While using Aikar’s flags and 2256, the webui dashboard says there’s only about 500 MB free, so I figured leaving around that much would be safer. After taking off the flags and changing XMS to 256, the server was only using about a gig and a half with both my friend and me on.

Most of the crashes happen right when I try to start the server, usually before you can join it or right after joining once it’s up. I’ve only had a few crashes after the server’s been on for a while. I’m starting to worry that this PC is just too old? It’s a shame, because it’s so much smoother than the free server we’re currently playing on.

Based on the error, I don’t think the issue is your system or the webui, but the actual build, or the Java flags. You should try a newer build if possible, and you should try fewer Java options.

XMX and XMS differences

I really encourage you to never set XMS lower than XMX, or at least never that far apart.

XMS is the starting memory, which is far less than what Paper ultimately requests. If I’m reading your logs correctly, and they came from a server that started and crashed within 55 seconds, then one contributing (though not fully explanatory) issue is how many garbage collections you’re forcing with a small XMS.

Garbage collections are expensive: CPU-wise, I/O-wise, and memory-wise.

GC Heap History (10 events):
Event: 41.715 GC heap before
{Heap before GC invocations=43 (full 2):
 PSYoungGen      total 741376K, used 226783K [0x0000000791000000, 0x00000007c0000000, 0x00000007c0000000)
  eden space 719872K, 28% used [0x0000000791000000,0x000000079d9f7d38,0x00000007bcf00000)
  from space 21504K, 92% used [0x00000007beb00000,0x00000007bfe80000,0x00000007c0000000)
  to   space 25088K, 0% used [0x00000007bcf00000,0x00000007bcf00000,0x00000007be780000)
 ParOldGen       total 276992K, used 116806K [0x0000000733000000, 0x0000000743e80000, 0x0000000791000000)
  object space 276992K, 42% used [0x0000000733000000,0x000000073a211908,0x0000000743e80000)
 Metaspace       used 54187K, capacity 58969K, committed 59028K, reserved 1099776K
  class space    used 7708K, capacity 8917K, committed 8920K, reserved 1048576K
Event: 41.916 GC heap after
Heap after GC invocations=43 (full 2):
}
Event: 41.916 GC heap before
{Heap before GC invocations=44 (full 3):

Event: 43.560 GC heap after
...
Event: 50.260 GC heap before
Event: 50.260 GC heap before
...
Event: 50.545 GC heap after
...
Event: 51.930 GC heap before
...
Event: 51.996 GC heap after
...
Event: 54.873 GC heap before
...
Event: 54.944 GC heap after

In this heap history you can see that there were collections at 41.715, 41.916, 43.560, 50.260, 51.930, 51.996, 54.873, 54.944…

Then crash.
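To put a number on it, here’s a quick shell tally of the event timestamps listed just above (hard-coded from the excerpt):

```shell
# GC event timestamps copied from the heap history above.
events="41.715 41.916 43.560 50.260 51.930 51.996 54.873 54.944"
count=$(echo $events | wc -w)
first=$(echo $events | awk '{print $1}')
last=$(echo $events | awk '{print $NF}')
awk -v c="$count" -v f="$first" -v l="$last" \
    'BEGIN { printf "%d GC events in %.1f seconds\n", c, l - f }'
```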

Basically, in the timeframe of like 60 seconds, you’re practically doing nothing but garbage collecting, which I’m sure you can guess is a bad sign. Remember, GC goes like this:

1: heap full (or one of the sub-allocations, younggen/oldgen)
2: scan memory to see what is old, and being often reused
3: decide to promote or not promote
4: detect memory which is old and not being reused
5: release lowly-reused memory, compact memory (compacting not relevant in G1GC)
6: find contiguous memory to allocate to heap
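If you want to watch that cycle live, rather than only post-mortem in hs_err files, Java 8 can log every collection. Something like the following (the heap sizes and log path are just examples):

```shell
# Java 8-era GC logging: prints each collection's generation sizes and
# pause time to gc.log, so you can see how often collections really happen.
java -Xms2G -Xmx2G -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -Xloggc:gc.log -jar paperclip.jar
```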

Java tweaks are bad

That said, I vehemently disagree with the tweaks here, and quite frankly, I’d blame them for all this unexpected Java behavior.

java -Xms6G -Xmx6G -XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -XX:MaxGCPauseMillis=100 -XX:+DisableExplicitGC -XX:TargetSurvivorRatio=90 -XX:G1NewSizePercent=50 -XX:G1MaxNewSizePercent=80 -XX:G1MixedGCLiveThresholdPercent=35 -XX:+AlwaysPreTouch -XX:+ParallelRefProcEnabled -Dusing.aikars.flags=mcflags.emc.gs -jar paperclip.jar

It is, in my opinion, one of the most self-destructive tweaks I’ve ever seen:

Let’s examine this more closely.

If the application being fine-tuned has a relatively consistent object allocation rate, it is acceptable to raise the target survivor occupancy to something as high as -XX:TargetSurvivorRatio=80 or -XX:TargetSurvivorRatio=90. The advantage of being able to do so helps reduce the amount of survivor space needed to age objects. The challenge with setting -XX:TargetSurvivorRatio= higher is the HotSpot VM not being able to better adapt object aging in the presence of spikes in object allocation rates, which can lead to tenuring objects sooner than you would like.

Spikes would be exactly what you’re experiencing at any initial load time.

Here’s what it means to tenure early:

41.960  ParOldGen       total 276992K, used 174838K
43.560  ParOldGen       total 453120K, used 180836K

In the course of less than two seconds, the old gen was grown from 276 MB to 453 MB, all to accommodate … 180 MB of actual memory. Just like the young generation, the old generation is getting hugely sized up, but it’s mostly not even getting used!

Old gen is good for reused memory, but we don’t know that that’s what’s happening here, because we’re seeing things get tenured into oldgen sooner than we would like.
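Running the numbers from the 43.560 line (hard-coded below) shows how little of the newly grown old gen is actually occupied:

```shell
total_kb=453120   # ParOldGen total after the resize, from the log above
used_kb=180836    # ParOldGen used after the resize, from the same line
awk -v t="$total_kb" -v u="$used_kb" \
    'BEGIN { printf "old gen is %.0f%% used after the resize\n", u * 100 / t }'
```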

So this tweak has set the survivor ratio to 90%, which means young-generation objects survive the memory reaping (deciding whether they live or die) more often. More survivals mean quicker promotion to the old gen (which is exactly what you’re seeing here). If things are unduly promoted to the oldgen, they don’t get cleared out/checked for relevance as often, and oldgen GCs are far more expensive, even!

So let’s follow your error output. Look closely at the eden space, where new fledgling memory allocations grow: it’s 28% used, but the survivor space is already 92% full. This means Java is running out of survivor space because it’s saving more survivors, yet the survivor space is tiny compared to what is available. Here’s an ASCII visual of eden vs. survivor areas.

OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOFFTT

Each O is available memory for newly created objects. Each F and T indicates where memory designated as “important enough to keep” is kept (“from” and “to”). Exacerbated by a low XMS, which forces judgment on survivors, F and T will fill up at an accelerated pace because of TargetSurvivorRatio.

So you have huge numbers in the young gen (741376K, or 741 MB of “eden”), but you’re basically only able to use about 21504K (21 MB, “from”) before Java worries it’s running low on space. So rather than run out of survivor space, it promotes survivors to oldgen space.
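A quick sanity check on those proportions, with the capacities hard-coded from the log excerpt above:

```shell
eden_kb=719872   # eden capacity from the hs_err excerpt
from_kb=21504    # survivor "from" capacity from the same excerpt
awk -v e="$eden_kb" -v f="$from_kb" \
    'BEGIN { printf "survivor is %.1f%% the size of eden\n", f * 100 / e }'
```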

Notice how the From space continuously fills up, forcing garbage collections every few seconds? This is made worse by MaxGCPauseMillis=100, which says “try to keep your GCs at 100 ms, please”. So now you have GC collections sometimes every 200 ms, sometimes every other second. In short, far more often than there should be.

GC Heap History (10 events):
Event: 41.715 GC heap before
{Heap before GC invocations=43 (full 2):
 PSYoungGen      total 741376K, used 226783K [0x0000000791000000, 0x00000007c0000000, 0x00000007c0000000)
  eden space 719872K, 28% used [0x0000000791000000,0x000000079d9f7d38,0x00000007bcf00000)
  from space 21504K, 92% used [0x00000007beb00000,0x00000007bfe80000,0x00000007c0000000)
  to   space 25088K, 0% used [0x00000007bcf00000,0x00000007bcf00000,0x00000007be780000)
 ParOldGen       total 276992K, used 116806K [0x0000000733000000, 0x0000000743e80000, 0x0000000791000000)

Event: 41.916 GC heap after
Heap after GC invocations=43 (full 2):
 PSYoungGen      total 744960K, used 25064K [0x0000000791000000, 0x00000007c0000000, 0x00000007c0000000)
  eden space 719872K, 0% used [0x0000000791000000,0x0000000791000000,0x00000007bcf00000)
  from space 25088K, 99% used [0x00000007bcf00000,0x00000007be77a160,0x00000007be780000)
  to   space 25088K, 0% used [0x00000007be780000,0x00000007be780000,0x00000007c0000000)
 ParOldGen       total 276992K, used 174838K [0x0000000733000000, 0x0000000743e80000, 
Event: 41.916 GC heap before
{Heap before GC invocations=44 (full 3):
 PSYoungGen      total 744960K, used 25064K [0x0000000791000000, 0x00000007c0000000, 0x00000007c0000000)
  eden space 719872K, 0% used [0x0000000791000000,0x0000000791000000,0x00000007bcf00000)
  from space 25088K, 99% used [0x00000007bcf00000,0x00000007be77a160,0x00000007be780000)
  to   space 25088K, 0% used [0x00000007be780000,0x00000007be780000,0x00000007c0000000)
 ParOldGen       total 276992K, used 174838K [0x0000000733000000, 0x0000000743e80000, 0x0000000791000000)

Event: 43.560 GC heap after
Heap after GC invocations=44 (full 3):
 PSYoungGen      total 744960K, used 0K [0x0000000791000000, 0x00000007c0000000, 0x00000007c0000000)
  eden space 719872K, 0% used [0x0000000791000000,0x0000000791000000,0x00000007bcf00000)
  from space 25088K, 0% used [0x00000007bcf00000,0x00000007bcf00000,0x00000007be780000)
  to   space 25088K, 0% used [0x00000007be780000,0x00000007be780000,0x00000007c0000000)
 ParOldGen       total 453120K, used 180836K [0x0000000733000000, 0x000000074ea80000, 0x0000000791000000)

If the JVM spends enough time doing constant GCs and pausing instead of doing real work, other threads and operations can be blocked. Operations that are blocked may be too important for Paper to continue working without, and you can get a crash.

I have long held that unless you actually use a Java profiler, you should stay away from heavily tweaking the startup flags, especially things like UnlockExperimentalVMOptions, which by its very name suggests it isn’t production-quality.

I think it’s very possible to get a good experience from your existing hardware. I think you’re overdoing it with the tweaks: keep it simple and tweak only when you have evidence it works. I don’t think the linked tweaks you read have any substance that would translate from their commercial-grade Minecraft offering to your older, aging computer.

Arguments against his justifications

Alright, here’s my ranting. Don’t worry if this doesn’t make much sense, this is as much for my edification as anything.

It appears the site you linked also says this at the end:

If you are running with 10GB or less memory for MC, you should not adjust these parameters.

I can’t tell whether that means you shouldn’t change the parameters from what they offer, or if it means you shouldn’t apply them at all. At any rate, with your hardware, it seems inappropriate to use these Java flags.

TargetSurvivorRatio : I’m sure your all use to seeing this one suggested. Good news! It’s actually a good flag to use. This setting controls how much of the Survivor space is ABLE to be used before promotion. If survivor gets too full, stuff starts promoting to Old Gen.

Here, I believe he’s getting a favorable result, but not for the reason he thinks. The survivor space can be used beyond the default 50%, but each time a new allocation pushes the space past 50%, a small collection occurs, moving memory from “FROM” to “TO”. This induces an additional check that makes it possible for stuff to get promoted, but it is not about fullness; it’s about surpassing the MaxTenuringThreshold=31.
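If you want to see what your own JVM actually defaults to before overriding anything, you can dump the effective flag values (the grep pattern here is just illustrative):

```shell
# Print the JVM's effective survivor/tenuring settings, then exit.
java -XX:+PrintFlagsFinal -version | grep -E 'TargetSurvivorRatio|MaxTenuringThreshold'
```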

G1NewSize Percent: […] With these settings, we tell G1 to not use its default 5% for new gen, and instead give it 50% at least! Minecraft has an extremely high a memory allocation rate, ranging to at least 800 Megabytes a second on a 30 player server! And this is mostly short lived objects (Block Position)

#1: this contradicts what I linked above from an actual Java whitepaper: not being able to better adapt object aging in the presence of spikes in object allocation rates. If this guy is claiming Minecraft has a high memory allocation rate, AND OF SHORT LIVED OBJECTS, then high survivor ratios are bad.

#2: SHORT LIVED OBJECTS. SHORT LIVED OBJECTS! They belong in eden space, where it is inexpensive to garbage collect them… they should not be promoted to Old Gen, where it is expensive. They will eventually be collected because they are short-lived objects, i.e., things that belong in the young generation.

Now, this means MC REALLY needs more focus on New Generation to be able to even support this allocation rate. If your new gen is too small, you will be running new gen collections 1-2+ times per second, which is really bad. You will have so many pauses that TPS has risk of suffering, and the server will not be able to keep up with the cost of GC’s.

The pauses happen because the survivor ratio is bad; he should instead be modifying NewRatio and the other ratios that help make good use of both the survivor spaces and the eden space.

Of course, he’s also pushing the G1 Garbage collector, where these ratios are more malleable because they aren’t contiguous, but unused allocations are bad, no matter which collector.

G1MixedGCLiveThresholdPercent : Controls when to include Mixed GC’s in the Young GC collection, keeping Old Gen tidy without doing a normal Old Gen GC collection. When your memory is less than this percent, old gen won’t even be included in ‘mixed’ collections. Mixed are not as heavy as a full old collection, so having small incremental cleanups of old keeps memory usage light.

WHAT!? The whole point he was trying to make about tenuring objects earlier out of the survivor space (short-lived objects, no less) makes no sense if he lowers the threshold at which old gen gets counted into expensive oldgen GCs!

The default is 65, and he lowered it to 35, which means: at 35% capacity, start checking ALL the stuff in the old gen to see if it should be collected. If he’s going to fast-promote things to the old gen, he shouldn’t have all that stuff double-checked when it’s only at 35% capacity! This is completely backwards!

Memory usage should not be kept “light”. Memory usage should be exactly what is asked by the calling program, in this case, PaperSpigot. Incremental cleanups at 35% of old gen is more expensive than incremental cleanups of new gen, so why not let them live in new gen longer and die there, so they never need to be copied to old gen in the first place?

AlwaysPreTouch : AlwaysPreTouch gets the memory setup and reserved at process start ensuring it is contiguous, improving the efficiency of it more. This improves the operating systems memory access speed.

Sounds solid. You should definitely do this only when XMX and XMS are near each other, though, which he does recommend.

MaxGCPauseMillis =100: This setting controls how much memory is used in between the Minimum and Maximum ranges specified for your New Generation. This is a “goal” for how long you want your server to pause for collections. 100 is equal to 2 ticks, aiming for an at most loss of 2 ticks. This will result in a short TPS drop, however Spigot and Paper both can make up for this drop instantly, meaning it will have no meaningful impact to your TPS. 100ms is lower than players can recognize.

This is a tweak that “sounds” like it makes sense. Why not limit garbage collection to an amount of time a user won’t notice, right?

Well, the thing is, when GC needs to happen, you want it to finish, too. And if it doesn’t finish, like these repeated collections you’re experiencing, then all it does is add overhead and requeue the GC, costing you more than if you had left it alone.

I’m really annoyed by this guy’s recommendations, even if everyone using them had 10 GB of RAM for Minecraft.

Thank you so much for the informative reply. I gotta admit I don’t really understand it all, but I’ll definitely take your advice and always set the xmx and xms to the same values and also stay away from unneeded flags.

Unfortunately I don’t remember whether I was using flags at the time of that error log. Here’s one from yesterday where I was definitely using flags: https://pastebin.com/TuZ4b4bD And here’s one from a crash a little while ago where I didn’t use flags: https://pastebin.com/6LYCpHua It usually doesn’t crash on alternating days, so I was surprised when it did. I just restarted the server, and that never works without crashing on crash days.

Also I would like to take this opportunity to thank you for creating MineOS. I was struggling to get other server wrappers to work at all and with MineOS it was surprisingly easy. If it wasn’t for this segmentation fault I wouldn’t have had any problems at all.

I will look into updating Java. Would you recommend changing to Oracle Java, or maybe a newer version of OpenJDK? Also, can you think of a reason why it would crash more often on alternating days? With my limited knowledge of Linux I can’t think of anything except checking cron jobs, but nothing sticks out to me that runs on alternating days. I do have my BIOS time set to local time, but I was able to configure the system to tag files with the correct time thanks to searching this forum.

Thanks again for all the help!

Have you tried hosting a server with fewer plugins? Or at least a different build?

Because while your new logs show that movement between the different Java memory spaces looks far more consistent and reliable, your game is still generating a ridiculous number of new objects in a ridiculously short span of time. Your machine is constantly garbage collecting.

Since this server is already so unstable, I’d really recommend instead to start with a much less plugin-filled server.

Take a look at the load average as reported in your webui (or via top from the command line). What is it in the first seconds of starting, what is it during normal operation, and finally, are you able to see what it is shortly before a crash?

And again, something really strikes me as wrong with the build you’re running. Here’s the problem area from your didn’t-use-flags output:

GC Heap History (10 events):
Event: 428.973 GC heap before
{Heap before GC invocations=47 (full 3):
 PSYoungGen      total 750080K, used 749929K [0x0000000786580000, 0x00000007c0000000, 0x00000007c0000000)
  eden space 744960K, 100% used [0x0000000786580000,0x00000007b3d00000,0x00000007b3d00000)
  from space 5120K, 97% used [0x00000007b3d00000,0x00000007b41da6a0,0x00000007b4200000)
  to   space 102912K, 0% used [0x00000007b9b80000,0x00000007b9b80000,0x00000007c0000000)
 ParOldGen       total 1889792K, used 304036K [0x0000000713000000, 0x0000000786580000, 0x0000000786580000)
  object space 1889792K, 16% used [0x0000000713000000,0x00000007258e9168,0x0000000786580000)
 Metaspace       used 70414K, capacity 76901K, committed 77144K, reserved 1114112K
  class space    used 10072K, capacity 11573K, committed 11608K, reserved 1048576K
Event: 428.995 GC heap after
Heap after GC invocations=47 (full 3):
 PSYoungGen      total 836608K, used 4635K [0x0000000786580000, 0x00000007c0000000, 0x00000007c0000000)
  eden space 733696K, 0% used [0x0000000786580000,0x0000000786580000,0x00000007b3200000)
  from space 102912K, 4% used [0x00000007b9b80000,0x00000007ba006d78,0x00000007c0000000)
  to   space 105472K, 0% used [0x00000007b3200000,0x00000007b3200000,0x00000007b9900000)
 ParOldGen       total 1889792K, used 304396K [0x0000000713000000, 0x0000000786580000, 0x0000000786580000)
  object space 1889792K, 16% used [0x0000000713000000,0x0000000725943168,0x0000000786580000)

Heap before/after:

eden: 100% -> 0%
from: 97% -> 4%
to: 0% -> 0%
Oldgen: 16% -> 16%

This isn’t specifically alarming, except that it’s telling us your server is creating 750 MB of objects that very quickly stop being needed (not a bad thing in itself). 30 seconds later, though, it does the exact same thing when it reports it has filled up.

So in other words, your server is just straight up generating hundreds of megabytes of objects it never uses again, in a short timeframe. It’s unclear whether your computer can keep up with constant generation of thrown-away objects. If a plugin adds some awesome feature, that’s great and all, but it might also mean the plugin responsible is putting undue, disproportionate stress on your machine, causing crashes.
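To make “hundreds of megabytes in a short timeframe” concrete: the young gen held 749929K when it filled, and it refills roughly every 30 seconds per the events above, which works out to (numbers hard-coded from the log):

```shell
young_used_kb=749929   # PSYoungGen used at the "heap before" event above
interval_s=30          # rough refill interval between those events
awk -v k="$young_used_kb" -v s="$interval_s" \
    'BEGIN { printf "%.1f MB/s sustained allocation\n", (k / 1024) / s }'
```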

Disclaimer:

If you’re interested, we can start a long-winded process of incrementally adding java arguments based on your crash dumps that I believe would improve performance and alleviate crashes. But I don’t want to set the expectation that I can solve these crashes–it might be entirely out of my hands with the build you’re using. But I do believe there’s data-driven ways to adjust it, to better the operability of your server.

Let’s get you started with an XMX/XMS of 2400mb:

-Xmx2400m
-Xms2400m

-Xmn1536m

Configures a large heap for the young generation (which can be collected in parallel), again taking advantage of the large memory system. It helps prevent short lived objects from being prematurely promoted to the old generation, where garbage collection is more expensive.

I’m making this recommendation based on the fact that paperspigot seems to generate ridiculously high numbers of objects. If these objects are meant to live shortly, then I want them all to fit, get cleaned up quickly, and never make it into the old gen.

Making this change with 1536m means 1536 MB of the XMX will go directly to the young generation, where right now it’s only getting 844 MB. This doubles the size of the young generation, which will make your server more robust to burst-created objects that live short lives.

This reduces the old gen, necessarily, to about 900mb, but according to your crash dumps, this should fit OK:

ParOldGen total 1889792K, used 305428K

Seems like only 305mb has been used, but it allocated 1.8GB.
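Putting those three flags together, the launch line would look something like this (jar name illustrative; everything else as recommended above):

```shell
# 2400 MB heap, started at full size, with 1536 MB reserved for the young gen.
java -Xms2400m -Xmx2400m -Xmn1536m -jar paperclip.jar
```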

I am not using any plugins at all. I have tried a couple of builds of Paper, starting with 186, and I’m now on 195. When I first started using MineOS I tried a vanilla 1.14.4 server with a new world, and it kept crashing so often that I never even got to see that world.

Usually at server start I’ve noticed the load goes straight to 1.0 and stays there for a few minutes after the server is ready, then goes down to about .5 or even .25 with just normal playing. If I fly around or do some really taxing stuff, I’ve seen it go to 1.5, or even 2.0 one time.

I just tried those flags and crashed before the server even got started so the load never really moved from .25. Here is that crash log: https://pastebin.com/ryi9GSjW

The second time I started it, the load started at about 0, then as soon as I started the server it steadily jumped up in roughly .25 increments every 20 seconds or so, reaching 1.5, then steadily dropped back down to about .5 in about 3 minutes. After about 7 minutes of leaving the server idle, the load drops to about .25 and oscillates between about .25 and .35. After about 10 minutes I joined the server and the load jumped up to about .75. I don’t know if it matters, but I logged into an area with a lot of farms set up. Just standing around in that area, I noticed the load jumps up to 1.0 then drops down to about .5. Leaving the farm area, the load drops down to about .25.

Usually at this point of being on the server this long, it’s pretty stable. I would say 95+% of the crashes happen during server start or first logging in and about 5 minutes after.

After about 20 minutes of the server being up, I issued a restart from the webui. The server did the same as before going from .25 to 1.5 and when it started dropping to about .75 the server crashed (today is a crash day) with this log: https://pastebin.com/Gjx9U0b7 Looking at the server log it does seem like the server finished loading before it crashed cause it says “Done (38.521s)! For help, type “help””.

Thanks again for all your help. I’m worried that this is a hardware problem and I’m wasting all your time. If you want to give up on it, I wouldn’t blame you. I haven’t tried changing the java yet, but that might just be the last thing I try once I figure out how to do it, unless you have any other ideas. I think I’m already using the latest openjdk 8 so I’m gonna try openjdk 11 but I’m pretty sure that’s what I was using under linux lite, so I’m not optimistic.

Unfortunately, I don’t know where to go from here. Just as I expected, the arguments I gave resized the memory areas just fine, and I think those are great changes, but as I delved deeper into the non-memory areas, I realized I probably didn’t need to look at memory in the first place.

siginfo: si_signo: 11 (SIGSEGV), si_code: 2 (SEGV_ACCERR), si_addr: 0x00007f3886005020

Internal exceptions (10 events):
Event: 5.207 Thread 0x00007f030000a800 Exception <a 'java/lang/NoSuchMethodError': java.lang.Object.lambda$comparing$77a9974f$1(Ljava/util/function/Function;Ljava/lang/Object;Ljava/lang/Object;)I> (0x000000076919d610) thrown at [/build/openjdk-8-8u222-b10/src/hotspot/src/share/vm/interpreter/l
Event: 5.251 Thread 0x00007f030000a800 Exception <a 'java/lang/NoSuchMethodError': java.lang.Object.lambda$or$2(Ljava/util/function/Predicate;Ljava/lang/Object;)Z> (0x0000000769342118) thrown at [/build/openjdk-8-8u222-b10/src/hotspot/src/share/vm/interpreter/linkResolver.cpp, line 620]
Event: 5.355 Thread 0x00007f030000a800 Implicit null exception at 0x00007f02f15540db to 0x00007f02f15547e9
Event: 5.379 Thread 0x00007f030000a800 Exception <a 'java/lang/NoSuchMethodError': java.lang.Object.remainder()Lcom/mojang/datafixers/types/templates/TypeTemplate;> (0x00000007698634e0) thrown at [/build/openjdk-8-8u222-b10/src/hotspot/src/share/vm/interpreter/linkResolver.cpp, line 620]
Event: 5.402 Thread 0x00007f030000a800 Exception <a 'java/lang/NoSuchMethodError': java.lang.Object.lambda$taggedChoiceLazy$0(Ljava/util/Map$Entry;)Lcom/mojang/datafixers/util/Pair;> (0x0000000769982590) thrown at [/build/openjdk-8-8u222-b10/src/hotspot/src/share/vm/interpreter/linkResolver.cp
Event: 5.419 Thread 0x00007f030000a800 Exception <a 'java/lang/NoSuchMethodError': java.lang.Object.or(Lcom/mojang/datafixers/types/templates/TypeTemplate;Lcom/mojang/datafixers/types/templates/TypeTemplate;)Lcom/mojang/datafixers/types/templates/TypeTemplate;> (0x0000000769a40e40) thrown at [
Event: 5.440 Thread 0x00007f030000a800 Exception <a 'java/lang/NoSuchMethodError': java.lang.Object.lambda$orElse$2(Lcom/mojang/datafixers/functions/PointFreeRule;)Lcom/mojang/datafixers/functions/PointFreeRule;> (0x0000000769b48278) thrown at [/build/openjdk-8-8u222-b10/src/hotspot/src/share/
Event: 5.442 Thread 0x00007f030000a800 Exception <a 'java/lang/NoSuchMethodError': java.lang.Object.lambda$once$3(Lcom/mojang/datafixers/functions/PointFreeRule;)Lcom/mojang/datafixers/functions/PointFreeRule;> (0x0000000769b6c3f0) thrown at [/build/openjdk-8-8u222-b10/src/hotspot/src/share/vm
Event: 5.594 Thread 0x00007f030000a800 Exception <a 'java/lang/NoSuchMethodError': java.lang.Object.lambda$getGeneric$10(Ljava/lang/Object;Ljava/util/Map;)Ljava/util/Optional;> (0x000000076d086e20) thrown at [/build/openjdk-8-8u222-b10/src/hotspot/src/share/vm/interpreter/linkResolver.cpp, lin
Event: 6.023 Thread 0x00007f030000a800 Exception <a 'java/lang/NoSuchMethodError': java.lang.Object.lambda$taggedChoiceType$1(Lorg/bukkit/craftbukkit/libs/org/apache/commons/lang3/tuple/Triple;)Lcom/mojang/datafixers/types/Type;> (0x000000076effd610) thrown at [/build/openjdk-8-8u222-b10/src/h

Errors and exceptions all over the place, none of which seem to point to memory as the issue.

Looking at the memory usage percentages, they are all agreeably low and balanced. Realistically, if we can’t get vanilla to run, then I suspect the issue lies outside memory altogether, because vanilla has the least strict Java requirements of them all.

So while I’m not particularly interested in giving up, I also fear that the issue we’re having is something I no longer know how to provide useful insight over. I think I know Java pretty well (in terms of startup flags), but beyond that is where my knowledge runs dry.

Give a new Java a try, and see if it works stable-ish for vanilla. If that doesn’t work, you may be right about the hardware.
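On a Debian-based system like Turnkey, checking and switching the active Java usually looks like this (package names vary by release; treat it as a sketch):

```shell
java -version                       # confirm which JVM is currently active
update-alternatives --config java   # pick among installed JVMs, if several
```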

I was able to install both Oracle Java 8 and Oracle Java 12, and they were both crashing with SIGSEGV. I’m throwing in the towel. It would have been great to get it working, but I knew going in that the hardware is pretty old and the chances of it working were slim. I was actually surprised how well it performed, but its stability is not acceptable.

I am truly grateful for the help you provided. You went above and beyond what I expected! I’m sorry it was all for nothing. Maybe in the future I’ll start over from scratch. If there’s any changes I’ll be sure to let you know.