VoxelWar Discussion thread

Miscellaneous projects by the Build and Shoot community.
LeCom


Ok, just posting this here because I already have a thread in this section and don't want to put it somewhere else and then wait until admins move it here. So, as some guys have been shouting across the forums, we have been working on a new remake. This is better than vxw because I wrote a completely new server for it, used D instead of trying to do highly complex stuff in C, and you can swap renderers as you want.
Pics: http://imgur.com/a/A26X6
Description of the pics, in order:
- team selection menu with "classic" as the background
- minimap of "river"
- blur after taking damage from a grenade (the grey stuff is smoke)
- a scope done the way Chameleon suggested it: inside the scope circle everything is zoomed in, and you actually see "through your weapon" position- and rotation-wise
- what happens if you fire an SMG at the ground and at a pontoon and blow up a grenade next to said pontoon (yes, pontoons actually move on water like irl)
- the new colour selection thingy
- and, though it may be hard to see in the last pic, the visibility range is decreased because it's a desert map; visibility, blur and fog colour all change slowly.
Some of the pics use an older UI design; the HP and ammo bars have since been moved to the bottom left corner.
Client download: newvxw.zip (1.06 MiB)
Server download: https://www.dropbox.com/s/thft2j2c2l0p2 ... r.zip?dl=1
Repositories:
client: https://github.com/LeComm/aof-client , voxlap renderer: https://github.com/LeComm/aofclient-voxlap-renderer , server: https://github.com/LeComm/aof-server

THANKS TO: Chameleon for doing gun models, Warp for doing player models, bloodfox for doing sounds, and ByteBit for hosting a server ^^
bloodfox
Post Demon
Posts: 2206
Joined: Mon Oct 21, 2013 4:32 pm


oh yeah and guys don't worry about that horrible looking UI. I'm working on it.
Monstarules
Organizer
Posts: 494
Joined: Sun Dec 16, 2012 4:44 pm


I could always offer my skillset if it could be used in coding, designs, and sounds.
STTT
Deuce
Posts: 1
Joined: Wed Jun 01, 2016 6:18 am


Couldn't connect to localhost:32887
Wonderful. How can I fix it?
Lincent
Veterans
Posts: 693
Joined: Wed Mar 27, 2013 9:47 pm


This is still a thing?
LeCom


Monstarules wrote:
I could always offer my skillset if it could be used in coding, designs, and sounds.
You're free to change anything and upload it somewhere.
STTT wrote:
Couldn't connect to localhost:32887
Wonderful. How can I fix it?
By running the server.
Marisa Kirisame
Deuced Up
Posts: 152
Joined: Sat Sep 21, 2013 10:52 pm


bloodfox wrote:
oh yeah and guys don't worry about that horrible looking UI. I'm working on it.
Please never change it, it's glorious.

Or at least have it as a possible option. If you have support for switchable themes feel free to call it "Lemonade Stand".
Chameleon wrote:
Shaders can s*** a d***.
...
By the way, when increasing model detail, bounded cube models look way better, while OpenGL might even look worse.
Or better, if you use shaders correctly.

Contrary to popular belief shaders are not restricted to bogging your computer down by showing off fancy SSAO + motion blur bullshit. I believe motion blur is typically implemented by rendering to an FBO texture, then rendering that texture with alpha blending. There is no need to explicitly use a shader for this, even though implicitly your driver is probably using one already.
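For the curious, here's roughly what that looks like on plain GL 2.1 + GL_ARB_framebuffer_object, with no explicit shader in sight. It's only a sketch: the handles, sizes and the draw_scene()/draw_fullscreen_quad() calls are placeholders, and it assumes the default framebuffer isn't cleared between frames so old frames can linger as the trail.
Code:
/* One-time setup: a colour texture attached to an FBO. */
GLuint blur_tex, blur_fbo;
glGenTextures(1, &blur_tex);
glBindTexture(GL_TEXTURE_2D, blur_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, scr_w, scr_h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &blur_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, blur_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, blur_tex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

/* Every frame: render the scene into the FBO texture... */
glBindFramebuffer(GL_FRAMEBUFFER, blur_fbo);
draw_scene();                      /* placeholder for the normal world render */
glBindFramebuffer(GL_FRAMEBUFFER, 0);

/* ...then blend that texture over whatever the default framebuffer still holds.
 * An alpha below 1.0 lets the previous frames decay away as a blur trail. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 1.0f, 1.0f, 0.7f);
glBindTexture(GL_TEXTURE_2D, blur_tex);
draw_fullscreen_quad();            /* placeholder: two textured triangles under a 2D projection */
glDisable(GL_BLEND);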

Shaders can actually be utilised to improve performance, mostly by reducing the amount of shit that has to go between the CPU and GPU, or in some cases between parts of the GPU. I've spent the last day and a bit abusing a geometry shader in ways that you should never abuse a geometry shader and I managed to squeeze out about 1000FPS w/ 127.5 spherical fog and a simple passthrough fragment shader.

Basically, ponder this question: What's faster for lighting a single dynamic light? 1. Calculating each vertex colour on the GPU as it gets passed along the pipeline, or 2. calculating the colour on the CPU and then banging it across to the GPU every frame? #1 is the correct answer, unless your GPU well and truly sucks balls but somehow has really fast STREAM_DRAW VBO updates. I mean, heck, you can even do it with the fixed function pipeline, but these days if you have a shader unit then the fixed function pipeline is implemented in shaders in the driver anyway.
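To make option 1 concrete, it's only a handful of GLSL 1.20 lines in a GL 2.1 vertex shader. A minimal sketch (the uniform names and the linear falloff are made up, and it assumes a trivial fragment shader that just writes v_col to gl_FragColor):
Code:
/* One dynamic point light, evaluated per vertex on the GPU. */
static const char *light_vert_src =
    "#version 120\n"
    "uniform vec3 light_pos;\n"      /* light position, same space as the vertices */
    "uniform vec3 light_col;\n"
    "uniform float light_radius;\n"
    "varying vec4 v_col;\n"
    "void main() {\n"
    "    float d = distance(gl_Vertex.xyz, light_pos);\n"
    "    float att = max(0.0, 1.0 - d / light_radius);\n"
    "    v_col = vec4(gl_Color.rgb * (0.3 + 0.7 * att * light_col), 1.0);\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";
The CPU only uploads three uniforms per frame instead of re-streaming every vertex colour, which is the whole point.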

Having said that, I'm convinced that it's possible to make OpenGL look like Voxlap, although I didn't actually have success with emulating the up/down look-at-players bug/feature last time I tried. Still, GL_POINTS is a thing, and so is gl_PointSize... but if you prefer bounded cubes, OpenGL handles that fairly nicely. Heck, there's no need to implement proper diffuse lighting - the earlier versions of Iceball just used per-face brightness as per AoS 0.x.
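By "GL_POINTS is a thing" I mean one vertex per visible voxel, with the splat size computed in the vertex shader. Rough shape of it as a sketch (GLSL 1.20; the point_scale uniform is made up, and you also need glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) on the C side):
Code:
/* Voxlap-ish point splatting: point size shrinks with eye-space depth. */
static const char *point_vert_src =
    "#version 120\n"
    "uniform float point_scale;\n"   /* roughly viewport_height * voxel_size / tan(fov/2) */
    "varying vec4 v_col;\n"
    "void main() {\n"
    "    vec4 eye = gl_ModelViewMatrix * gl_Vertex;\n"
    "    gl_PointSize = point_scale / -eye.z;\n"
    "    v_col = gl_Color;\n"
    "    gl_Position = gl_ProjectionMatrix * eye;\n"
    "}\n";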

TL;DR shaders as a technology are fucking awesome, they just tend to be used badly.

- <3 -

With that said, I have a rule: Where possible, aim for GL 2.1. If your GPU doesn't support that on any OS, it's probably not worth supporting. If it doesn't support that on Windows, it probably supports it on Linux and you'd probably get more mileage + faster gfx out of your computer if you switched.

Main reason for 2.1 is, you guessed it, shaders. Skeletal animation is one place where vertex shaders improve performance. And of course, if you really want to you can implement a raytracer/beamtracer renderer, which at a low enough resolution will actually render faster than a triangle renderer.

Also remember to support GL Core 3.1+ stuff (NOT compat profiles, seriously, don't bother with compat mode it'll harm performance and screw over anyone using the Mesa drivers). GL Core 3.3 seems to be the most common target, and even Mesa llvmpipe supports it... although then again, I get 30FPS on mesa.vxl in my renderer... on a Skylake i5, which has an integrated GPU that can render it at about 1000FPS under the same conditions. Best not to use software GL for anything other than checking if something works.

If you don't have stuff that supports GL Core 3.3, don't worry too much, because I do have that stuff! In the meantime, just stick to GL 2.1, and I'll try to get you a codebase that works nicely with 2.1 and Core 3.3.

- <3 -

Oh, and one more thing:

https://github.com/LeComm/aofclient-vox ... 69ff643c3c

You've got a merge conflict. You may want to fix that.
LeCom


Does the UI you have in mind include the "update" with the ammo/HP bar on the left? I have no taste when it comes to these things, so it's interesting to see what people actually think.
The trouble with shaders is that they're not equivalent to the CPU. They have their limitations in functionality. For example, I could raycast one ray per pixel, maybe even spread a ray over several pixels, but I cannot, e.g., blit something afterwards. Well, right, compute shaders, but those have only just been introduced. In any case I wonder whether a very simple GLSL raycaster could beat Voxlap.
And don't worry about that git commit problem thing, windows plebs will use the precompiled OMF .obj files anyways, while linux users know how to solve it.
Marisa Kirisame
Deuced Up
Posts: 152
Joined: Sat Sep 21, 2013 10:52 pm


LeCom wrote:
The trouble with shaders is that they're not equivalent to the CPU. They have their limitations in functionality.
I've seen the Intel GenX instruction set. It's quite different, but AFAIK if you have a thread in single-lane mode you can actually do general purpose stuff on it. But that's getting waaaaaaay too close to the metal.
LeCom wrote:
For example, I could raycast one ray per pixel, maybe even spread a ray over several pixels, but I cannot, e.g., blit something afterwards.
This is completely incorrect. Raytracing on the GPU is not immediately followed by a GL buffer swap, and all OpenGL implementations I'm aware of support double-buffering.

Heck, even in GL 1.1 you can just straight up blit stuff, although if you're just blitting from the CPU to the GPU it might get a bit slow (PBOs can help if you have them though, which you probably do as you have more than just GL 1.1). If on the other hand you have all the source image data on the GPU, then
Code:
glMatrixMode(GL_PROJECTION);   /* throw away the 3D projection... */
glLoadIdentity();              /* ...so you're drawing straight in 2D clip space */
glDisable(GL_DEPTH_TEST);      /* and the scene's depth buffer can't reject the blit */
is a thing. (Unless you're on a Core profile, in which case you'd switch to a shader that doesn't do a perspective transform... and disable GL_DEPTH_TEST of course.) OpenGL isn't limited to 3D, y'know.
LeCom wrote:
Well, right, compute shaders, but those have only just been introduced.
They were introduced and integrated into GL 4.3 in 2012, but if we're talking about the Mesa drivers then yeah, they're only just being introduced. (It's not even in the 12.0.0-rc1 beta, it IS in Git though.)

However, compute shaders aren't even necessary here. If we assume GL 2.1 + GL_ARB_framebuffer_object, which is the case for any non-shit GMA on Linux w/ recent enough Mesa (and a few shit ones too AFAIK), then we CAN write a raytracer, and in fact I HAVE written a raytracer which gets 30fps on a GMA 4500MHD rendering a Block 'n' Load map at 320x180 w/ a 1/4-scale depth+shadow pass (that pass is 80x45, by the way). All you need to do is draw two triangles using a suitable fragment shader.
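"Draw two triangles" really is most of the per-frame work, too. A sketch of the pass on GL 2.1, assuming ray_prog is an already-linked program whose fragment shader does the actual voxel march (the uniform names here are invented):
Code:
glUseProgram(ray_prog);
glUniform3f(glGetUniformLocation(ray_prog, "cam_pos"), cam_x, cam_y, cam_z);
glUniform2f(glGetUniformLocation(ray_prog, "screen_size"), (float)scr_w, (float)scr_h);

glMatrixMode(GL_PROJECTION); glLoadIdentity();   /* plain clip space; the shader builds the rays */
glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
glBegin(GL_TRIANGLES);                           /* two triangles spanning the whole viewport */
glVertex2f(-1.0f, -1.0f); glVertex2f( 1.0f, -1.0f); glVertex2f( 1.0f,  1.0f);
glVertex2f(-1.0f, -1.0f); glVertex2f( 1.0f,  1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
glUseProgram(0);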

Even without FBOs it's possible to raytrace but by that point it's effectively impossible to do multi-pass stuff. 2.1+FBO does cover pretty much everyone though, so just stick with that as your minimum system requirements.

If you wish to "blit" in this case, you can use an extra FBO as a render target for the HUD stuff, and then add that as an extra texture to your raytracer. It can even improve performance if you are tracing per ray, as if a pixel is completely blocked by a HUD element then you don't even need to trace it.
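The early-out is just a couple of lines at the top of the raytracer's fragment shader. Sketch below - hud_tex, screen_size and trace_scene() are all names I made up, with trace_scene() standing in for the actual ray march defined elsewhere in the same shader:
Code:
static const char *ray_frag_src =
    "#version 120\n"
    "uniform sampler2D hud_tex;\n"       /* the HUD FBO's colour texture */
    "uniform vec2 screen_size;\n"
    "vec4 trace_scene(vec2 uv);\n"       /* defined further down in the shader */
    "void main() {\n"
    "    vec2 uv = gl_FragCoord.xy / screen_size;\n"
    "    vec4 hud = texture2D(hud_tex, uv);\n"
    "    if (hud.a >= 1.0) { gl_FragColor = hud; return; }\n"
    "    vec4 scene = trace_scene(uv);\n"
    "    gl_FragColor = mix(scene, hud, hud.a);\n"
    "}\n";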

If you wish to do compute stuff in such a way that the CPU can read it, PBOs were integrated into GL 2.1, which are good for improving glReadPixels performance.
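A sketch of that, assuming an RGBA8 framebuffer (the buffer handle and dimensions are illustrative):
Code:
/* Asynchronous readback through a pixel-pack PBO (core in GL 2.1). */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, scr_w * scr_h * 4, NULL, GL_STREAM_READ);

/* With a PBO bound, glReadPixels writes into the buffer and returns immediately... */
glReadPixels(0, 0, scr_w, scr_h, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);

/* ...so map it later (ideally a frame later), once the GPU has actually finished. */
void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
/* ... use pixels ... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);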

If you wish to do compute stuff EFFICIENTLY, on the other hand, GL 3.0 has integer texture formats and transform feedback, and GL 3.2 Core also has geometry shaders (which can, in turn, be plugged into transform feedback).
LeCom wrote:
In any case I wonder whether a very simple GLSL raycaster could beat Voxlap.
Well no, but that's purely because a very simple GLSL raycaster is very simple, and Voxlap is very complex and highly optimised. However, if Voxlap traces every pixel, which is an "embarrassingly parallel" problem, then the GPU really shines. The scatter-gather unit helps a lot here.
LeCom wrote:
And don't worry about that git commit problem thing, windows plebs will use the precompiled OMF .obj files anyways, while linux users know how to solve it.
And then they'll tell you to fix it.

Seriously, just fix it.
Kuunikal
Deuced Up
Posts: 187
Joined: Sun Mar 13, 2016 8:37 pm


How's VXW coming along?
Chameleon
Modder
Posts: 601
Joined: Thu Nov 22, 2012 6:41 pm


Kuunikal wrote:
How's VXW coming along?
Not coming along at all.
But LeCom has made another engine (he posted it in this thread because the engine is still in early production, aka I haven't done shit). And work on that engine starts now.
LeCom


No wtf cham, are you mentally challenged? Stop spreading lies. This thing has been finished for like 2 months already, and it has by far more "gameplay" content than IB. And you, other guy, should have checked the previous page first.
Marisa Kirisame
Deuced Up
Posts: 152
Joined: Sat Sep 21, 2013 10:52 pm


LeCom (paraphrased) wrote:
it's pretty much ready now stop waiting around and go play it you useless fucks
Nice to see we have something in common.

My first suggestion: Start a new thread. You can be like me and chuck it into the AoS 0.x Discussion section and let it stay there until someone decides to dump it into the tiniest corner of the forums they can think of. But basically the OP should be updateable and pertain to the right version.

My second suggestion: Run a 64-bit OS for testing. Your code is not 64-bit clean - the server segfaults in enet_host_compress_with_range_coder natively, so at the moment I have to run the supplied Python runtime in Wine. Running a VM in qemu-system-x86_64 should be fine. For a distro, you could go with Void Linux - I use the LXDE base (non-musl) on my desktop, and while I *am* chewing about 1.5GB of RAM right now, most of that is Firefox - realistically you could get away with a 384MB VM if you don't open much. Void also appears to have dmd in its package repository. And most importantly, Void has binary packages, so you don't have to waste a lot of time compiling things.

Elaborating on "64-bit clean": size_t != int. size_t tends to be the width of the address bus (i.e. pointer-sized), while int stays 32-bit on the usual 64-bit ABIs. I had to fix a bug in Iceball at least once because someone committed something with this mistake in it and they'd only tested it on 32-bit.
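A made-up illustration of the class of bug (not the actual Iceball one), for anyone who hasn't been bitten by it yet:
Code:
#include <stdio.h>

int main(void)
{
    /* On LP64 (the usual 64-bit Linux ABI): int is 32 bits, size_t and pointers are 64. */
    printf("sizeof(int)=%zu sizeof(size_t)=%zu sizeof(void*)=%zu\n",
           sizeof(int), sizeof(size_t), sizeof(void *));

    size_t len = 5000000000u;   /* ~5 billion: fits in size_t, does not fit in int */
    int truncated = (int)len;   /* the size_t-into-int mistake, made explicit */
    printf("size_t says %zu, int says %d\n", len, truncated);

    /* Passing 'truncated' to malloc() or a loop bound works by accident on 32-bit,
     * where both types are the same width, and breaks only on 64-bit builds. */
    return 0;
}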
LeCom


Yea I might actually just spam this somewhere.

So 64-bit means a 64-bit address bus and 64-bit registers, but int is still 32-bit on 64-bit systems? Sheet, now how fucking gay is that.
Well, I would fix it if I still had the motivation to work on it, but I should probably advertise it somewhere it will actually be seen or something first. Btw, how come it crashes when it's a 32-bit application? Don't 64-bit CPUs have a 32-bit emulation mode?
longbyte1
Deuced Up
Posts: 336
Joined: Sun Jul 21, 2013 7:27 pm


Marisa Kirisame wrote:
My second suggestion: Run a 64-bit OS for testing. Your code is not 64-bit clean - the server segfaults in enet_host_compress_with_range_coder natively, so at the moment I have to run the supplied Python runtime in Wine. Running a VM in qemu-system-x86_64 should be fine. For a distro, you could go with Void Linux - I use the LXDE base (non-musl) on my desktop, and while I *am* chewing about 1.5GB of RAM right now, most of that is Firefox - realistically you could get away with a 384MB VM if you don't open much. Void also appears to have dmd in its package repository. And most importantly, Void has binary packages, so you don't have to waste a lot of time compiling things.

Elaborating on "64-bit clean": size_t != int. size_t tends to be the width of the address bus (i.e. pointer-sized), while int stays 32-bit on the usual 64-bit ABIs. I had to fix a bug in Iceball at least once because someone committed something with this mistake in it and they'd only tested it on 32-bit.
Do you happen to be cutting corners in your test environment? I mean this politely; the reason I ask is that many distributions do not ship pure x86-64 binaries when they say "64-bit". Some carry both x86 and x86-64 stuff, so it would be best to test on a distribution that only ships x86-64 libraries and binaries, to make sure nothing is quietly depending on a 32-bit (i386/i686) library. I don't think it will directly solve the problem, but it's the cleanest test setup one can get.

also 300th post