Mineos-ruby megathread

It’s that time again, everybody! The time when I feel like I really want to discover a new programming language and need a large, ambitious project to direct my learning. That brings us to mineos-ruby.

A few months back I started this by creating the foundation of MineOS scripts: the core functionality of creating servers, starting and stopping them, editing configuration files, and the like.

Over the past few days, I’ve been engineering the new webui portion, which takes an almost entirely different design approach. Whereas mineos and mineos-node integrated heavily into the underlying system, leveraging existing Linux users and shadow passwords, mineos-ruby will have its own authentication system, modularly implementing arbitrary user/pass, OAuth, MojangID, or any/all of them combined, based on the user’s needs.
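To make the modularity concrete, here’s a minimal sketch of what a pluggable auth layer could look like in ruby. All names here are illustrative, not actual mineos-ruby code, and the plaintext password store exists only for the sketch:

```ruby
# Base interface every auth backend implements.
class AuthProvider
  def authenticate(username, credential)
    raise NotImplementedError
  end
end

# Simple user/pass backend (plaintext only for the sketch; a real one
# would hash and salt, or call out to an external service).
class PasswordAuth < AuthProvider
  def initialize(users)
    @users = users  # e.g. { "admin" => "secret" }
  end

  def authenticate(username, credential)
    @users[username] == credential
  end
end

# "Any/all of them combined": a login succeeds if any configured
# backend (user/pass, OAuth, MojangID, ...) accepts it.
class AuthChain
  def initialize(*providers)
    @providers = providers
  end

  def authenticate(username, credential)
    @providers.any? { |p| p.authenticate(username, credential) }
  end
end
```

An OAuth or MojangID backend would just be another `AuthProvider` subclass dropped into the chain.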

For many, mineos-node will continue to fit the bill, and it will be maintained to the best of my ability; I’m hoping mineos-ruby will appeal both to those who have minimal requirements in what they need to manage and to those who want more scalable, enterprise-y software.


Perhaps enterprise is a bit of a stretch, but enough people have asked me over the life of mineos-node for the ability to control multiple machines with the webui, and for reseller-like functionality such as an administrative superuser alongside regular mineos admins.

No, there are no plans whatsoever to turn mineos-ruby into a billing system, nor will it implement resource limits (e.g., CPU, HDD constraints), but I do want to make it scale. This might mean that from one, singular webui you can run several different servers distributed on several different machines. I envision a setup where I connect to a single front-end host and from there can run a half dozen servers on other machines, and so forth, all with good bulk control, rather than the single-server model the webui currently manages.

Like I said, not all users will find this model a necessary upgrade and can therefore stick with mineos-node. But for those who want the newness and polish, mineos-ruby will likely also be designed to deploy cleanly on solitary machines (with the help of maybe docker, etc.)

What’s next?

Well, ruby, like node before it, is a new language to me, so I don’t expect this to be a working alpha until at least Q1 2017. If any of you are ruby developers, you’re welcome to let me know, and we can see where your skills and enthusiasm line up, whether that’s writing code, UI work, or whatever it might be.

What I’ve described here today, and the model I’m aiming for, is not set in stone. Much about the webui can and will change and I hope to do it with some insight and use cases from you, the future administrators.

Please use this thread for any discussion and feature requests, opposition pieces, or whatever you think mineos-ruby should be.


hi Sir hexparrot,

congratulations on your journey!

would this still be based on Turnkey, or somehow different?

how different?



Will there be an option in mineos-ruby allowing the usage of the “legacy” authentication method that mineos and mineos-node use? If not, how will file transfers and “remote console” via ssh work? Will the new UI provide such functionality (for file transfers) or will we have to turn to using that legacy method to be able to do such actions?

I’m more than likely wrong, but wouldn’t OAuth and/or MojangID require “valid” https certs, as well as some sort of server-side allowance? I’ve seen/used a site in the past that had an option to use MojangID for Minecraft username verification, but I’m not sure how it works as I’ve never tried it. And I know Google has OAuth, but I’m not sure if it’s usable without “valid” https certs and/or something else (yes, I know this site has Google OAuth, but it also has a “valid” https cert and is more “static”, if that even makes any sense; I probably should’ve just done some research instead of asking questions…)

Will the UI have to be installed on a per-server basis, followed by linking via something somewhere in the UI or its configuration files? I know we’re able to disable the Python and Node UIs and just use the underlying scripts via the command line, so is that how it’d work with Ruby?

What about if the UI on the main system somehow crashes (or we decide to restart the machine or stop the UI’s process)? How will the back-end systems handle it? Would the disabled UIs then become active for local administration, or stay disabled regardless of the front-end server’s UI status? (I know Python and Node handle crashes and restarts very well, so I know servers won’t go down because the UI did.)

I have a question I believe I already know the answer to: what about if you wanted to have a server act ONLY as a front-end and then use back-ends to host all servers? That leads into the next one: will you be able to set the UI functionality to something like “front-end-only mode”, “back-end-only mode”, and/or “front-end AND back-end mode”?

Which again leads to yet another question: what about routing? Would it be possible to have the front-end act as a router/hub, forwarding both ports and packets to back-end servers using iptables and/or UFW? (I know you can already do that, but I mean having a configuration page for the purpose of configuring it.)

A bit more info on the forwarding/routing part:

It’d make it easier for most people to get port forwarding set up on their router for the front-end server and then not have to worry about the back-end servers; a lot of people I know actually use DMZ, even on a VM, out of laziness. But with a multi-server layout it’d actually make more sense to use DMZ on the front-end and, from there, forward ports to the back-ends, effectively having the front-end act as a firewall for the back-ends (and really for the entire network). And while DMZ is typically less secure, since the front-end is directly exposed to outside inbound traffic and connections, with the proper configuration it’d be just as secure as port forwarding from the router itself, or possibly even more secure, since you have more tools to work with, such as packet monitoring, vNICs (usable on physical systems without virtualization software), and so on.

A bit more info on the forwarding/routing part but without the discussion of back-ends (one host does it all):

For those who’d use a “front-end and back-end” configuration, they’d also be able to host servers on localhost and then forward inbound packets to the open localhost port, which is essentially what the “smarter” people do when they run BungeeCord front-end(s) on the same system as their Spigot back-end(s).

Extra “might as well not even be there” information:

I think I’d probably use a VM for the front-end and then have the physical systems be the back-ends; or a weak system such as my laptop, or my mom’s 16+ year old desktop that’s still here for some reason and doesn’t do anything I ask. (Surprisingly it boots from USB, but I can’t install an OS because booting from USB seems to disable the internal drive, while booting off the internal drive works fine. It also obviously lacks RAM, having only 256MB, and DDR/DDR2 RAM is more expensive than DDR3 for some reason; I don’t know if I’m the only one who’s noticed that, but it’s weird and makes no sense for me to spend more on older tech. I previously wanted to use it for BungeeCord and then forward from there to my other servers and the VM whenever I had the VM up, but it obviously didn’t work.)

:joy: here we go again :joy:

From my own software development and integration experience:

  • I’ve found it best to implement an underlying CLI API.
  • The UI would use the API to do its processing.
  • As the API would be interface-agnostic, it would remain the main focus of development.
  • UIs could be developed independently of the API.
  • All UIs would remain viable interfaces.
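That separation might look something like this in ruby (a toy sketch; `ServerAPI` and `run_cli` are made-up names, not real mineos code):

```ruby
# The interface-agnostic core: every consumer (CLI, webui, tests)
# calls these same methods.
module ServerAPI
  def self.create(name)
    "created #{name}"   # placeholder for real server-creation logic
  end

  def self.start(name)
    "started #{name}"   # placeholder for real process spawning
  end
end

# The CLI is just one thin consumer: it parses arguments and
# delegates everything to the API. A webui would do the same.
def run_cli(argv)
  command, name = argv
  case command
  when 'create' then ServerAPI.create(name)
  when 'start'  then ServerAPI.start(name)
  else 'usage: mineos <create|start> <server-name>'
  end
end
```

Because all logic lives in `ServerAPI`, a UI crash or redesign never touches the core, which is the point being made above.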

I’m not saying do this. :slight_smile: Just saying what I’ve done in the past and has worked for me across the decades. :stuck_out_tongue: :wink:

Well, whenever the final product gets completed, there will likely be a way I distribute it, and Turnkey may or may not be an option. I have yet to see how a) ruby will stack up against node in terms of footprint and b) whether my model of distributed servers makes the webui so different it even makes sense to try to deploy it on single nodes. In all likelihood, node will continue to be the main flagship release despite there being something newer and shinier because it just does the job it’s meant to do already.

In my mind, the new model will delegate all authentication to a service (background process) in an entirely separate space (host/container) than the minecraft servers themselves. This means authentication could literally be any type desired, whether it is username/password or any other type I listed. This also means users’ interaction will be only with this machine, which in turn manages other worker machines–I understand this might feel a bit foreign, but there are reasons for it.

One key thing to remember is that mineos-ruby is not just a ruby version of the mineos webui as we know it. While every new webui I make builds and improves on the last, simply re-engineering a webui for a fourth time doesn’t appeal to me much: creating a new version that helps broaden the audiences and use-cases does appeal to me, though. With that said, I know a lot of effort is going to go into making this webui also work with literally a single, standalone machine, because I know there are a lot of people who want to see the overall administration workflows improve, too.

Remember, none of this is set in stone–asking these questions is perfect because it’ll help me see all the sides before committing to a model. The current idea is:

all remote consoles will be available through the webui on the – let’s call it – primary front-end. I actually already have this working. Each of the worker machines (which run the java minecraft process) has a lightweight listener which can do all the core functionality (equivalent to mineos.js). A remote browser can then see – live – all the updates as they’re passed from the worker → primary front-end → X number of administrators using the webui.
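A minimal sketch of that relay pattern in ruby (names are hypothetical, and the real listener would push over a network transport rather than in-process callbacks):

```ruby
# Fan-out relay: the worker-side listener publishes console lines,
# and the front-end forwards each line to every connected webui
# session (the subscribers).
class ConsoleRelay
  def initialize
    @subscribers = []
  end

  # Each admin's webui session registers a callback.
  def subscribe(&block)
    @subscribers << block
  end

  # Called once per console line arriving from a worker.
  def publish(server_name, line)
    @subscribers.each { |cb| cb.call(server_name, line) }
  end
end
```

One line from a worker reaches X administrators; the worker never needs to know how many browsers are watching.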

File transfers are going to work similarly. The primary front-end will be the place where administrators upload anything from server jars to minecraft archives (worlds). The front-end will also be responsible for transmitting these files to the worker.

So that raises the question: why transmit to one machine just so it can be transmitted again? Well, it only seems a bit off if we’re looking at it from a single-node configuration. But consider downloading a 145-megabyte profile: it seems more frugal to spend the time downloading it once and then push it to multiple machines (local network transfers) rather than downloading it repeatedly on neighboring machines from the internet.

This would help eliminate the need to ever SSH into the actual minecraft server’s node. It might also (maybe?) help centralize things like server archives, which get pushed to the primary front-end and won’t exist/take up space on the actual java-running machines themselves.

If OAuth does, then I can just scrap that particular idea. MojangID, though, definitely doesn’t need a valid cert: you can do Mojang auth even from the command line, which is effectively what the primary front-end would do anyway.
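For reference, “Mojang auth from the command line” amounts to a JSON POST to Mojang’s Yggdrasil authserver (`https://authserver.mojang.com/authenticate`, as the API existed around the time of this thread). This sketch only builds the request payload; nothing is sent:

```ruby
require 'json'

# Build the JSON body for Yggdrasil's /authenticate endpoint.
# The agent block identifies the game; username is the Mojang
# account email (or legacy username) and password is plaintext,
# which is why this must only ever travel over https.
def mojang_auth_payload(username, password)
  {
    agent:    { name: 'Minecraft', version: 1 },
    username: username,
    password: password
  }.to_json
end
```

A successful response returns an access token, which is what the primary front-end would hold on to for the session.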

The UI will exist by being hosted by the primary front-end. There will be one UI driven by a service on the same host, and one or more workers with no UI whatsoever, but with an established trust with the primary front/back end.

Since the UI is completely separate from the worker machines, the UI can crash 100 times and have no effect on the servers themselves.

This is pretty much exactly what the intended idea is right now. Full isolation from the UI, uploads, downloads–from the workers.

This is actually the biggest hiccup. Whereas I’m delegating the front-end machine to handle all downloading of external resources and uploading of imported worlds, routing to each individual machine is… a separate matter.

Yes, it is technically possible for routing to go through the front-end, through the use of ip routing utilities, e.g., iptables. Presumably… well, the answer I have to this is much how it would be handled with vSphere + ESXi hosts, which is: each of the satellite workers has its own IP address, so they are just as routable by that IP address from, say, your router. Right now people do use DMZ, and that’s a shorthand to make any number of ports open and available, but the model I’m describing means creating new servers would also necessitate touching the router’s port-forwarding settings as well.
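If routing did go through the front-end, creating a server would boil down to emitting a DNAT rule like the one below. This hypothetical helper only builds the command string and applies nothing to the system; a real setup would also need matching FORWARD-chain accepts and, typically, a MASQUERADE/SNAT rule for return traffic:

```ruby
# Build (but never run) an iptables DNAT rule that forwards an
# externally reachable port on the front-end to a worker machine.
def forward_rule(public_port, worker_ip, worker_port)
  "iptables -t nat -A PREROUTING -p tcp --dport #{public_port} " \
  "-j DNAT --to-destination #{worker_ip}:#{worker_port}"
end
```

This is exactly the kind of per-server bookkeeping that makes the front-end-as-router model a maintenance burden and a point of failure.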

So here’s the predicament: this offloads work that single machines used to solve with DMZ, leaving the admin to do hole-punching and port-forwarding individually… or else all the traffic goes through the front-end. I’m opting for the first: while it can’t be automated the way routing through the already-open front-end could be, sending everything through the front-end also makes it a point of failure, which is far more critical and scary.

This is actually what could be seen as an ideal case. There’s not a lot of work that’s going to happen on the front-end system; uploading, sending profiles here and there, and storing archives–none of this is CPU intensive, nor is hosting the webui.

These are very well-thought-out concerns, and having to express my vision in writing definitely helps me dismiss or reinforce the idea being good or bad, so thank you very much for the post!

Use SSH connections and port forward through them from the primary server to the slave servers. Use file streams at each end point. Stream process to process across the connection. The only iptables work would have to be done on the primary server. - Just a thought.
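That suggestion, sketched as the ssh invocation the primary would run per worker (the command array is built but never executed here, and host/user names are made up):

```ruby
# Build the ssh command for a local port forward: connections to
# local_port on the primary are tunneled to worker_port on the
# worker. -N means no remote command; -L sets up the forward.
def tunnel_command(local_port, worker_host, worker_port, user: 'mineos')
  ['ssh', '-N', '-L',
   "#{local_port}:localhost:#{worker_port}",
   "#{user}@#{worker_host}"]
end
```

With one such tunnel per worker, only the primary’s ports are ever exposed, which is exactly why it becomes the single choke point discussed below.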

This assumes, of course, that the admin is OK with the primary front-end being a point-of-failure for the other servers, which isn’t a very good situation. The webui luckily will be process-separate from iptables, but that also means the primary front-end has to be on 24/7, and cannot be restarted.


If someone wants the primary server to be the gateway then this is valid. Otherwise, secondary servers should either promote themselves or act as responders with the primary server relaying connections back to the client with a relay address.

Having a single point of failure is far from perfect in a production environment, but this is effectively what most infrastructures already encompass in a gateway firewall. A better solution would involve network services intended to resolve such functionality: ZeroConf/Bonjour/MDNS TXT records, NDP/ICMP net ids, Active Directory…

I implement services as LXC/LXD VMs behind a MACVLAN on a VLAN encompassed by a software bridge with DHCP, DNS propagation and bi-directional network routing services piggy-backed. So I can fire-up whatever I need as and when I need it. A little scripting can go a long way. :wink:

That was the plan: to help you plan for Ruby (since I’m good with planning/helping to plan).

There are however, people who do use/build networking servers (meaning that’s their router/hub) as well as others who may have more than one internet connection. A really good example of why it’d be a good idea for the UI to be able to configure the front-end to handle routing packets to back-end servers would be for anyone who has a layout like so:

Example 1:

  • MineOS front-end/router
  • Ethernet Connection 1
    • Internet/ISP
  • Ethernet Connection 2
    • MineOS back-end

Example 2:

  • MineOS front-end/router
  • Ethernet Connection 1
    • Internet/ISP
  • Ethernet Connection 2
    • Ethernet Network Switch
      • MineOS back-end 1
      • MineOS back-end 2
      • MineOS back-end 3
      • [And so on]

Example 3:

  • MineOS front-end/router
  • Ethernet Connection 1
    • Internet/ISP
  • Ethernet Connection 2
    • MineOS back-end 1
  • Ethernet Connection 3
    • MineOS back-end 2
  • Ethernet Connection 4
    • MineOS back-end 3

For some, seeing a lineup like that, and knowing that Minecraft servers (like most network-enabled programs) require IP addresses to function, DHCP may come to mind. However, chances are we’d be manually configuring the IPs for each back-end and for the front-end’s second Ethernet connection, especially since running DHCP there could conflict with the main network (for those who have a router acting as a DHCP server with the front-end connected to it, which isn’t shown in any of the examples above, and only if the subnet and IP ranges match the router’s). There’s also the possibility of specifying which connection to forward to, but that may still confuse some people. Either way, anyone with two or more connections and/or a layout like this should know how to configure IPs manually; and if the network is entirely on its own, maybe DHCP would be used.

I thought of that kind of layout for two reasons:
1: You said

2: I’ve been working on/drawing up some networking diagrams, which include a WDS-enabled/interconnected mesh network (realistically it’s a hybrid network, but the main links between routers are wireless; so it’s a multi-router network interconnected via 5GHz Wi-Fi, and anyone wanting to connect will either have to use the Ethernet ports on one of the routers or connect via 2.4GHz, simply to minimize interference with the network’s performance). I came up with that based on my desktop currently being connected to my main router over Wi-Fi, via Ethernet on my secondary router, which I believe I’ve mentioned in the past, back when I had my ‘server’ connected that way.

I have another suggestion for you concerning deployment: if/when you decide to create a preconfigured ISO containing MineOS-Ruby, you should include 3 options (or 2-3 different ISOs): “Front-end Installation”, “Back-end Installation”, and last but not least, “Front-end Networking Installation” (that is, if you do decide to let the UI/front-end handle routing for back-ends; that one would specifically be for those who use the front-end as the router/hub of the entire network).

I feel like I explained everything terribly, mostly because school’s got me tired and sleepy (as always), so sorry if any of it is unclear.

EDIT: Added another layout below that’d be less-common but not uncommon:

Example 4:

  • MineOS front-end
  • Ethernet Connection
    • Internet/ISP
      • MineOS back-end 1
      • MineOS back-end 2
      • MineOS back-end 3

That layout would probably be less secure in some ways but that’s where VPN (like PPTP) and/or SSH tunnels come in, as @Silluinglin stated.

Two examples of why anyone would even have a layout like the one shown in Example 4 would be
1: The front-end is located in a data-center/VPS while the back-ends are located in their homes
2: The front-end is located in their home while the back-ends are located in data-centers/VPS
