Blzut3's Weblog

News

ECWolf migrated to Git 2019-09-09 00:09:38

Following the news that Bitbucket will drop Mercurial support in 2020, I have migrated the repositories to Git. Just in case they're needed for historical reference, the old Mercurial repositories will remain with their "-hg" suffix until they're automatically deleted. Ultimately nothing should be lost since the hg-git conversion is generally pretty solid (Zandronum has been using it to sync with GZDoom), and I even wrote a script to migrate the subrepositories.
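
For those curious how the subrepository part works, the gist is that Mercurial lists its subrepos in a .hgsub manifest ("path = source" per line), so migrating mostly means registering each entry as a Git submodule that points at the converted repository. Below is a minimal sketch of that idea, not my actual script, and map_url() is just a placeholder for however the old Hg URLs map to their Git conversions:

    #!/usr/bin/env python3
    # Minimal sketch: turn Mercurial subrepos (.hgsub) into Git submodules.
    # map_url() is a placeholder; the real mapping depends on where each
    # converted repository ended up.
    import subprocess

    def map_url(source):
        # Hypothetical: however the old Mercurial URL maps to the converted
        # Git repository (e.g. dropping an "-hg" suffix).
        return source.replace('-hg', '')

    with open('.hgsub') as manifest:
        for line in manifest:
            line = line.strip()
            if not line:
                continue
            path, source = (part.strip() for part in line.split('=', 1))
            subprocess.run(['git', 'submodule', 'add', map_url(source), path],
                           check=True)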

In the end this will likely be a good thing for the project. As much as I loved using Mercurial, the reality is that Git is what everyone knows, and the poor translation between Git and Hg has been a bit of a thorn in my side when it comes to pull requests (Hg branches are not the same as Git branches). That's even putting aside the fact that Bitbucket's Mercurial support has been slowly eroding ever since they added Git support many years back. No longer having to deal with feature omissions will be nice for me at least.

Still, I'm going to miss TortoiseHg and subrepo support that works the way an end user actually expects. For the former there are a number of GUIs for Git, but so far I haven't found one that doesn't annoy me in some way. Usually the problem is that they focus on stuff that's easy to do on the command line and don't solve the problem I'm actually trying to solve, which is browsing changes across history. For the latter Git has submodules, but for whatever reason they never just update the way the user expects and require a bit of confusing manual intervention at some point or another. I guess the flip side of the way Mercurial does it is that the old Hg repos are now kind of hard to check out: since the subrepos aren't in the location Mercurial expects them to be, the checkout errors out completely.

I've already had a few people ask me why I didn't switch to another host or self host. Essentially, Mercurial is losing its equivalent to GitHub, and without it I don't have much hope for the platform to continue thriving. There are a few sites trying to cater to those who wish to continue using Mercurial, but I haven't seen one with the same "everyone can use it for free" policy that's somewhat essential these days for welcoming new programmers. Self hosting poses a similar problem, but now submitting patches becomes emailing patches. Sure, it's decentralized that way, but in my experience most people are resistant to using email these days.

Then the question becomes "why even stick with Bitbucket?" Honestly I'm not sure there are any particularly great reasons, but two things keep me inclined to stay. First, I like that they have an organization feature for repos, unlike GitHub where, yes, you can have a team account, but within that team it's still a flat list ordered by modification time. Second, I was using their issue tracker for ECWolf, and by staying I was able to keep it. The downside is that GitHub is more popular, but I don't think having an Atlassian account (or sending an email) is that big of a hurdle for people who want to contribute.

New development builds server 2018-11-13 04:44:03

For those who aren't familiar, I've been building most of the binaries on the DRD Team devbuilds site. How that came about is that people were requesting ECWolf development builds, but no one was volunteering to do them, so I just set up some scripts and a VM on my Mac, and it has been building (mostly) nightly for several years now. Since I had already expended the effort to set up an automatic build for ECWolf, I ended up absorbing most of the other builds as well.

The build Mac was a base model late 2009 Mac Mini. Shortly after it took on the build server role I upgraded it with 8GB of memory and a 1TB SSHD, which sped things up a lot, but of course there's only so much a mobile Core 2 Duo can do. Fast forward to the end of 2016 and the release of Mac OS X 10.12: even though the late 2009 MacBook with identical specs to the Mac Mini is supported, I ended up stuck on 10.11 (yes, I'm aware there are workarounds to install even Mojave on it). Between the Core 2 just not cutting it anymore by my standards and the machine no longer being supported by Apple, it was time for a new one, but of course this was right at the time Apple was forgetting that they even made Macs.

Now, while I realize it might be a tad ironic since I buy retro computer hardware all the time, I of course couldn't bring myself to buy a 2 year old computer, much less a 4 year old one. That's even ignoring the fact that the 2012 models were higher specced. But here we are in November of 2018 and Apple has finally released a new Mac Mini, with non-soldered memory no less! So of course I bought one. For those who want to know, I got the i7-8700B with 1TB storage, 10G Ethernet (since my NAS and main desktop are 10G), and of course 8GB of RAM since I can upgrade that later.

Here we are 9 years later, and once again I'm paying almost as much for the Mac as for my desktop and getting a third of the power.

Now that I get to basically start over from scratch, I'm taking some time to make my build scripts a little more generic. Over the years I've had people ask to see my scripts (mostly since they wanted to help fix some build issues), which wasn't really useful since they were just basic "make it work" style throwaway code. This time around I want to get something I can throw up on GitHub, not because the scripts will be particularly reusable by anyone, but rather to provide a way to make what's actually happening more transparent.

I haven't talked about the elephant in the room yet though: Windows. When I initially set up the build server in 2012 I decided to use Windows Server 2012. The primary reason for this was that I was hoping to use the server core mode to reduce memory usage, on top of Windows 8 seemingly using less memory than Windows 7. This was especially important since at the time the Mac only had 2GB of RAM. This was fine until Visual Studio 2017 came along and required Windows 8.1. What most people don't know is that while Windows 8.0 and 8.1 are treated as the same OS by Microsoft, their server counterparts, 2012 and 2012R2, are treated as separate operating systems. In other words, both 2012 and 2012R2 are supported for the same period of time and there's no free upgrade between them. So everyone that was using 2012 for build servers got screwed.

Due to how slow the 2009 Mac Mini was, the fact that the builds were working, and laziness, I didn't bother doing anything about it. But now that I have a new machine, I need to figure out what I'm going to do for the Windows VM. I would like to try server core again given that Windows containers are a thing, we now have an SSH server on Windows, and I would get to control when updates are applied. But after being screwed over by new Visual Studio versions requiring the "free upgrades" to be applied, I'm not sure this route is viable, especially given the cost of Windows Server. I might end up just having to use Windows 10, which I don't really have a problem with, except that the VM will be suspended most of the time, so the auto updates could pose some problems with the builds reliably kicking off at the scheduled time. Still not sure what to do here.

Anyway, with all of that said to pad out this post so that it's more than a few sentences: if anyone was wondering, it's not particularly difficult to set up a PowerPC cross compiler on Mojave. That actually surprised me, since my previous setup involved compiling on Snow Leopard and moving the install over, but it seems that the XcodeLegacy scripts have gotten a little less quirky since the last time I ran them. Right now it's GCC 6, since that's what I had working on my PowerMac, but I've heard newer versions do work. I just need to try one, confirm that it works, and update the article. Update 2018/12/10: Confirmed GCC 8.2 is working!

AMD K6-2+/K6-III+ freezing in Quake 2018-04-22 02:54:07

I was recently helping my brother with a K6-III+ build for DOS gaming and had to debug a rather odd issue. With write combining enabled, Quake would freeze or sometimes crash quickly when running at high resolutions (the few other games I tested seemed to be fine). It didn't matter if the CPU was running at 200MHz, with an underclocked FSB, you name it. The board I was using is a Biostar M5ALA Rev 1.1 with the ALi Aladdin V chipset, which is supposedly the more stable of the Super Socket 7 chipsets. I did eventually figure out that disabling the L2 and L3 caches, or using a K6-2 (non-plus), would make the game stable, but a SiS530 based PCChips board (probably the cheapest board I've ever used) was also perfectly stable, ruling out issues with the CPU.

However, it turns out the real problem was much simpler than a hardware fault. Running speedsys produced a rather anomalous result for memory writes.

I initially ignored the result, but after spending a few hours with a friend trying to figure out what could be wrong with k6wcx, we tried installing k6dos.sys. Not only did this boost frame rates well beyond what k6wcx was doing (90+fps in 320x200 and 30+fps in 640x480, versus about 70 and 20 respectively), but Quake was suddenly stable. Furthermore, setting the memory regions with k6wcx on top of that gave an additional boost in speed without issues! But more importantly, the write tests now came back correct in speedsys.

So it appears that the M5ALA sets the cache to write-through instead of the generally preferable write-back! It also seems that write-through doesn't agree with write combining.
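
For a rough sense of why the resolution mattered, here's some back-of-the-envelope math (Quake's software renderer works in 8-bit color, so call it one byte per pixel): 640x480 pushes nearly five times as many bytes to the framebuffer per frame as 320x200, so anything that slows down those writes hits the higher resolutions that much harder.

    # Back-of-the-envelope framebuffer traffic at the frame rates quoted above,
    # assuming one byte per pixel (8-bit software rendering).
    for width, height, fps_fast, fps_slow in ((320, 200, 90, 70), (640, 480, 30, 20)):
        frame_bytes = width * height  # bytes written to the framebuffer per frame
        print(f"{width}x{height}: {frame_bytes} bytes/frame, "
              f"{frame_bytes * fps_fast / 1e6:.1f} MB/s vs "
              f"{frame_bytes * fps_slow / 1e6:.1f} MB/s")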

New server plus some odds and ends 2018-02-04 06:58:52

I have finally purchased my own hosting and moved this site off MancuNET. MancuNET has been great for the last 5 years, but between having a full-time job and all the sales of S3DNA, I figured it was quite silly to be using someone else's resources. Plus I started using Matrix, and due to some complexities in regards to SSL certs, running things myself just proved to be easier. (For those who aren't aware of Matrix, it's a fully open and federated communication platform. Similar to Discord or Slack in feature set, just 100% open.)

While not really related, I also decided to move ECWolf's repositories over to a Bitbucket team. When I initially moved ECWolf from SVN over to Mercurial, Bitbucket didn't have the teams feature, so I had the repo under my name. While I'm still largely the sole contributor to ECWolf, I figured it's better to get the transition out of the way. The big benefit is that all the ECWolf related repos are more discoverable, since they won't get buried under any other projects I may be working on. The downside is that everyone needs to update their clones to point to the new location.

ECWolf also has a new logo/icon courtesy of NeuralStunner. The hope here is that the new icon still feels related to Wolfenstein while being freely licensable.

I suppose this is as good a time as any to formally talk about my plans for ECWolf 1.4. I think it's clear as day to anyone that I scoped too much into the 1.4 release, at least for these days when I don't have quite as much time to work on hobby programming as I used to. In order to narrow the focus, I have decided to drop Mac Wolfenstein support from the 1.4 goals. To be clear, this means nothing long term, as ECWolf will still get Mac Wolf support, but I would personally rather focus on finishing the initial multiplayer code for 1.4.

On the topic of refocusing, I would like to formally say what everyone else has no doubt figured out before me: I no longer have time to actively participate in GZDoom/Zandronum development. I know, real shocker here, but making this formal really does lift a lot mentally. I've definitely thought way too much about the shared code in ECWolf, since at this point my contribution rate really is that of any other third party contributor. I will of course still be doing the builds that I have been doing, so this really is just a statement for my own sake; as far as everyone else is concerned, things are continuing as normal.

Now, while things have been quite slow, they haven't been exactly dormant over the last year either. A few months ago I had a random epiphany that adding more Z height support would actually be quite easy to do, so ECWolf can now render things at varying heights. Admittedly it's not especially useful right now, but it's a step in the right direction, and it makes for screenshots that are more interesting than the feature actually is.

Sorry for the lack of content here in 2017. Last year has been the year of projects that get stopped just before they're complete for one reason or another. I wanted to write about the video capture card I bought, but then I discovered my computer was bottlenecking its capabilities, so it wouldn't be fair to the reader to speculate. I had hoped to write about my over-the-top retro computing build, but that still has a lot of small details to finish out. I finally purchased a new primary computer, and the fun story I planned to write about that got delayed by early adopter issues. For those wondering, I'm now running a Threadripper 1950X. It's quite an amazing system for development and brings my ECWolf compile times down to 2 seconds. Not that they were particularly long before, but even everything else I work on now has negligible compile times compared to the Lynnfield/Nehalem system I had before. So glad to finally be beyond quad core processors!

Oh, and I also now own ecwolf.org, which redirects to any page in the ecwolf section of this site. I don't have any immediate plans to host the site under that domain.


© 2004-2019 Braden "Blzut3" Obrzut