VoxelWar Discussion thread

Miscellaneous projects by the Build and Shoot community.
LeCom
Post Demon
Posts: 866
Joined: Sat May 24, 2014 8:07 am


Does the UI you want include the "update" with the ammo/HP bar on the left? I have no taste when it comes to these things, so it's interesting to see what people actually think.
The trouble with shaders is that they're not equivalent to a CPU; they have their limitations in functionality. For example, I could raycast one ray per pixel, maybe even spread a ray across several pixels, but I can't e.g. blit something afterwards. Well, right, compute shaders, but those have only just been introduced. In any case I wonder whether a very simple GLSL raycaster could beat Voxlap.
And don't worry about that git commit problem thing, windows plebs will use the precompiled OMF .obj files anyways, while linux users know how to solve it.
Marisa Kirisame
Deuced Up
Posts: 155
Joined: Sat Sep 21, 2013 10:52 pm


LeCom wrote:
The trouble with shaders is that they're not equivalent to a CPU; they have their limitations in functionality.
I've seen the Intel GenX instruction set. It's quite different, but AFAIK if you have a thread in single-lane mode you can actually do general purpose stuff on it. But that's getting waaaaaaay too close to the metal.
LeCom wrote:
For example, I could raycast one ray per pixel, maybe even spread a ray across several pixels, but I can't e.g. blit something afterwards.
This is completely incorrect. Raytracing on the GPU is not immediately followed by a GL buffer swap, and all OpenGL implementations I'm aware of support double-buffering.

Heck, even in GL 1.1 you can just straight up blit stuff, although if you're just blitting from the CPU to the GPU it might get a bit slow (PBOs can help if you have them though, which you probably do as you have more than just GL 1.1). If on the other hand you have all the source image data on the GPU, then
Code: Select all
glMatrixMode(GL_PROJECTION);
glLoadIdentity();                 /* identity projection: vertices map straight to NDC */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDisable(GL_DEPTH_TEST);         /* a 2D overlay shouldn't depth-test against the scene */
is a thing. (Unless you're on a Core profile, in which case you'd switch to a shader that doesn't do a perspective transform... and disable GL_DEPTH_TEST of course.) OpenGL isn't limited to 3D, y'know.
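For concreteness, the blit itself under those matrices is just a textured quad. A minimal sketch, assuming `tex` is a texture name that already holds the source image (the name is mine, not from any particular engine):
Code: Select all
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);           /* texture objects exist since GL 1.1 */
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();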
LeCom wrote:
Well, right, compute shaders, but those have only just been introduced.
They were introduced and integrated into GL 4.3 in 2012, but if we're talking about the Mesa drivers then yeah, they're only just being introduced. (It's not even in the 12.0.0-rc1 beta, it IS in Git though.)

However, compute shaders aren't even necessary here. If we assume GL 2.1 + GL_ARB_framebuffer_object, which is the case for any non-shit GMA on Linux w/ recent enough Mesa (and a few shit ones too AFAIK), then we CAN write a raytracer, and in fact I HAVE written a raytracer which gets 30fps on a GMA 4500MHD rendering a Block 'n' Load map at 320x180 w/ a 1/4-scale depth+shadow pass (that pass is 80x45, by the way). All you need to do is draw two triangles using a suitable fragment shader.
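And the "two triangles" part really is all the geometry the whole renderer needs. A rough sketch, assuming `prog` is your linked raytracing program and its position attribute was bound to location 0 with glBindAttribLocation before linking:
Code: Select all
/* Fullscreen pass: two triangles covering clip space; the fragment
 * shader does the actual tracing. GL 2.1, client-side arrays. */
static const GLfloat verts[] = {
    -1.0f, -1.0f,   1.0f, -1.0f,   1.0f,  1.0f,   /* triangle 1 */
    -1.0f, -1.0f,   1.0f,  1.0f,  -1.0f,  1.0f,   /* triangle 2 */
};
glUseProgram(prog);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, verts);
glDrawArrays(GL_TRIANGLES, 0, 6);
glDisableVertexAttribArray(0);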

Even without FBOs it's possible to raytrace, but at that point it's effectively impossible to do multi-pass stuff. 2.1+FBO covers pretty much everyone though, so just stick with that as your minimum system requirement.

If you wish to "blit" in this case, you can use an extra FBO as a render target for the HUD stuff, and then feed that in as an extra texture to your raytracer. It can even improve performance, because if a pixel is completely covered by an opaque HUD element then you don't need to trace it at all.
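In shader terms that's just an early-out at the top of main(). A sketch, where hud_tex, screen_size, and trace_ray() are all made-up names, and trace_ray() is assumed to be defined elsewhere in the same shader:
Code: Select all
/* Fragment shader source (as a C string) for the HUD early-out:
 * sample the HUD FBO's texture first, only trace when the HUD
 * doesn't fully cover the pixel. */
static const char *frag_src =
    "uniform sampler2D hud_tex;\n"
    "uniform vec2 screen_size;\n"
    "vec4 trace_ray(vec2 uv);\n"
    "void main() {\n"
    "    vec2 uv = gl_FragCoord.xy / screen_size;\n"
    "    vec4 hud = texture2D(hud_tex, uv);\n"
    "    if (hud.a >= 1.0) { gl_FragColor = hud; return; }\n"
    "    vec4 scene = trace_ray(uv);\n"
    "    gl_FragColor = mix(scene, hud, hud.a);\n"
    "}\n";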

If you wish to do compute stuff in such a way that the CPU can read the results, PBOs (integrated into GL 2.1) are good for improving glReadPixels performance.
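The usual pattern looks like this; `pbo`, `w`, and `h` are assumed to already exist. With a PBO bound to GL_PIXEL_PACK_BUFFER, glReadPixels returns immediately and you map the buffer once you actually need the data:
Code: Select all
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, NULL, GL_STREAM_READ);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0); /* async copy into the PBO */
/* ...do other CPU work here while the transfer runs... */
void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
/* ...consume pixels... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);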

If you wish to do compute stuff EFFICIENTLY, on the other hand, GL 3.0 has integer texture formats and transform feedback, and GL 3.2 Core also has geometry shaders (which can, in turn, be plugged into transform feedback).
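For instance, uploading raw voxel data as a GL 3.0 integer texture looks roughly like this (the size and `voxel_data` are made up):
Code: Select all
/* One 32-bit uint per texel. Integer textures must use NEAREST
 * filtering and are read in GLSL via usampler2D + texelFetch. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 512, 512, 0,
             GL_RED_INTEGER, GL_UNSIGNED_INT, voxel_data);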
LeCom wrote:
In any case I wonder whether a very simple GLSL raycaster could beat Voxlap.
Well no, but that's purely because a very simple GLSL raycaster is very simple, while Voxlap is very complex and highly optimised. However, if Voxlap traces every pixel, which is an "embarrassingly parallel" problem, then the GPU really shines. The scatter-gather unit helps a lot here.
LeCom wrote:
And don't worry about that git commit problem thing, windows plebs will use the precompiled OMF .obj files anyways, while linux users know how to solve it.
And then they'll tell you to fix it.

Seriously, just fix it.
Kuunikal
Deuced Up
Posts: 184
Joined: Sun Mar 13, 2016 8:37 pm


How's VXW coming along?
Chameleon
Modder
Posts: 601
Joined: Thu Nov 22, 2012 6:41 pm


Kuunikal wrote:
How's VXW coming along?
Not coming along at all.
But LeCom has made another engine (he posted it in this thread because the engine is still in early production, a.k.a. "I haven't done shit"). And work on that engine starts now.
LeCom
Post Demon
Posts: 866
Joined: Sat May 24, 2014 8:07 am


No wtf Cham, are you mentally challenged? Stop spreading lies. This thing has been finished for like 2 months already, and it has by far more "gameplay" content than IB. And you, other guy, should have checked the previous page first.
Marisa Kirisame
Deuced Up
Posts: 155
Joined: Sat Sep 21, 2013 10:52 pm


LeCom (paraphrased) wrote:
it's pretty much ready now stop waiting around and go play it you useless fucks
Nice to see we have something in common.

My first suggestion: Start a new thread. You can be like me and chuck it into the AoS 0.x Discussion section and let it stay there until someone decides to dump it into the tiniest corner of the forums they can think of. But basically the OP should be updateable and pertain to the right version.

My second suggestion: Run a 64-bit OS for testing. Your code is not 64-bit clean - the server segfaults in enet_host_compress_with_range_coder natively, so at the moment I have to run the supplied Python runtime in Wine. Running a VM in qemu-system-x86_64 should be fine. For a distro, you could go with Void Linux - I use the LXDE base (non-musl) on my desktop, and while I *am* chewing about 1.5GB of RAM right now, most of that is Firefox - realistically you could get away with a 384MB VM if you don't open much. Void also appears to have dmd in its package repository. And most importantly, Void has binary packages, so you don't have to waste a lot of time compiling things.

Elaborating on "64-bit clean": size_t != int. It tends to be the width of the address bus. I had to fix a bug in Iceball at least once because someone committed something which had this mistake on it and they'd only tested it on 32-bit.
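A tiny self-contained demo of the failure mode on a 64-bit (LP64) system, purely illustrative:
Code: Select all
/* On LP64, int stays 4 bytes while size_t grows to 8, so stuffing
 * a size_t into an int silently truncates (implementation-defined;
 * this shift assumes a 64-bit size_t). */
#include <stdio.h>

int main(void) {
    size_t big = (size_t)1 << 33;   /* 8 GiB; fits in a 64-bit size_t */
    int n = (int)big;               /* truncates; typically 0 on LP64 */
    printf("sizeof(int)=%zu sizeof(size_t)=%zu n=%d\n",
           sizeof(int), sizeof(size_t), n);
    return 0;
}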
LeCom
Post Demon
Posts: 866
Joined: Sat May 24, 2014 8:07 am


Yea I might actually just spam this somewhere.

So, 64 bit means 64 bit address bus and 64 bit register size. So this means that int is still 32 bit on 64 bit systems? Sheet, now how fucking gay is that.
Well, I would fix it if I still had the motivation to work on it, but I should probably advertise it somewhere it will actually be seen first. Btw how come it crashes when it's a 32-bit application? Don't 64-bit CPUs have a 32-bit compatibility mode?
longbyte1
Deuced Up
Posts: 336
Joined: Sun Jul 21, 2013 7:27 pm


Marisa Kirisame wrote:
My second suggestion: Run a 64-bit OS for testing. Your code is not 64-bit clean. [...]
Do you happen to be cutting corners in your test environment? I mean this politely; many distributions are not pure x86-64 even when they say "64-bit". Some carry both x86 and x86-64 binaries, so it would be best to test on a distribution that ships only x86-64 libraries and binaries, to make sure nothing is quietly depending on a 32-bit (i386/i686) library. I don't think it will directly solve the problem, but it's the cleanest test setup one can get.

also 300th post
Marisa Kirisame
Deuced Up
Posts: 155
Joined: Sat Sep 21, 2013 10:52 pm


LeCom wrote:
So, 64 bit means 64 bit address bus and 64 bit register size. So this means that int is still 32 bit on 64 bit systems? Sheet, now how fucking gay is that.
Well, I would fix it if I still had the motivation to work on it, but I should probably advertise it somewhere it will actually be seen first. Btw how come it crashes when it's a 32-bit application? Don't 64-bit CPUs have a 32-bit compatibility mode?
The server crashes when run with a 64-bit Python loading a 64-bit libenet.so. It works fine in 32 bits.

Seriously, set up a 64-bit VM in Qemu or something and pick a fairly lightweight distro.
longbyte1 wrote:
Do you happen to be cutting corners in your test environment?
No. The code is not 64-bit clean, so when I try to run the server natively it crashes, but when I run the plebserver in Wine it works... and the last time I tried compiling the code there were a LOT of errors pertaining to mismatched type sizes, mostly surrounding size_t.
LeCom
Post Demon
Posts: 866
Joined: Sat May 24, 2014 8:07 am


Sure you're using a 64 bit cvxl.so?
Aside from me having absolutely no experience with these VMs, I'd also have to free some space on my disk and set one up, and it's not worth the result right now. In the long run, I have access to a 64-bit machine once a week which I can use to fix my code. Can't you simply use 32 bit emulation until then? It's not like my stuff is the only software that isn't 64-bit-ready.
longbyte1
Deuced Up
Posts: 336
Joined: Sun Jul 21, 2013 7:27 pm


LeCom wrote:
Can't you simply use 32 bit emulation until then? It's not like my stuff is the only software that isn't 64-bit-ready.
No, Marisa seems more focused on pointing out your code smells.
Edgamer63
Deuce
Posts: 17
Joined: Sat Aug 09, 2014 5:44 pm


I want to download and play this ;-; ... can someone pass me a link to download this game?
LeCom
Post Demon
Posts: 866
Joined: Sat May 24, 2014 8:07 am


Edgamer63 wrote:
I want to download and play this ;-; ... can someone pass me a link to download this game?
Just one page back; precisely http://buildandshoot.com/viewtopic.php? ... 35#p153110
Marisa Kirisame
Deuced Up
Posts: 155
Joined: Sat Sep 21, 2013 10:52 pm


LeCom wrote:
Sure you're using a 64 bit cvxl.so?
Of course, since I built it myself:
Code: Select all
$ file cvxl.so
cvxl.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=9b0b68f23caebf2a403cfca593cc9b982e928c33, not stripped
LeCom wrote:
Can't you simply use 32 bit emulation until then? It's not like my stuff is the only software that isn't 64-bit-ready.
Technically I could:
Code: Select all
$ xbps-query -Rs python-32bit
...
[-] python-32bit-2.7.11_9             Interpreted, interactive, object-oriented programming language (32bit)
But I'd rather fix the issue so I don't have to use it.
Komrade Ivan
Deuce
Posts: 8
Joined: Sat Sep 10, 2016 11:12 pm


I'm confused. Why does it crash on startup? Am I supposed to do something fancy to get it to run?