
All Activity


  1. Yesterday
  2. Vervecraft is a Minecraft community aimed at mature, laid-back players who don't need strict rules. Features on the server:

Server runs 1.19.2
Single-player sleep in multiplayer
The Ender Dragon drops an Elytra
Player-based economy, trading items for items; diamonds are generally the server's currency
Creeper explosions are disabled because I hate creepers
Phantoms are disabled in-game; creepers drop phantom membranes since phantoms are disabled
Land claiming, used for grief prevention; the command to claim land is "/claim"
A role called Regular, which you can earn within 8 hours; with it, you can build around the spawn island to grow it into a town
You can use bonemeal on sugar cane, though it may come at a cost to the amount... be careful!
Nitro Boosters can have their own custom tags
More to come....

You can contribute to the Minecraft community in a lot of ways: bring your friends, report any bugs, make friends, keep it drama-free, and basically just have fun. That's the point of being laid-back. Just... don't be annoyingly immature.

LINKS
Discord: https://discord.gg/CfMrcDYgEw
Community: https://moddingcommunity.com/forums/community/5-vervecraft/
  3. Last week
  4. This was the second Reddit thread I made back in 2016 when trying to address issues to Valve regarding modding support in CS:GO. Sadly, they never replied or reached back out, but the Reddit thread did receive a lot of positive attention!

Valve's Poor Support For Community Servers & Lack Of Communication (2nd Attempt)

”I’ve seen this thread before?”
There is a chance you’ve seen this thread before on this subreddit. On August 19th, a thread with content very similar to this one was posted. However, that thread was deleted by a malicious user (I’ve since secured my account). Therefore, I am reposting the thread with adjustments.

Hello, everyone. I would like to address the current state of Valve’s support for community servers. I’ve tried this before and the thread I made did succeed (thank you, r/GlobalOffensive!). However, we never received a response from Valve. A majority of these topics have been brought up before but sadly went unnoticed by Valve. I am hoping Valve can give some sort of response this time. In the end, it is worth a try!

Last Reddit Post
The last Reddit post about community support can be found here.

Topics
1. Servers Running Content Against Valve’s Server Guidelines.
2. Valve’s Communication With Server Operators.
3. Consider A Beta For Source 2.
4. The Current State Of The Server Browser.
5. Other Small Things.

TL;DR
Basically, some servers are running plugins that violate Valve’s server guidelines. These plugins give those servers a big advantage over servers that obey the guidelines, and Valve isn’t doing enough to eliminate the offending servers either. Next, Valve’s communication with server operators is also poor. Server operators would appreciate it if the CS:GO developers at least started posting on the mailing list again after each update (with patch notes), as the TF2 team currently does. That said, there are many other areas where Valve can improve communication.
Furthermore, the community server browser is also highly outdated and broken; I believe a Garry’s Mod-like server browser would fit CS:GO very well. I also believe we need to give non-empty servers an advantage over empty servers. Other small things include fixing the current Linux limitation in CS:GO servers, allowing players to gain XP in community servers, allowing community servers to participate in the Quick Play pool, patching exploits more quickly, and my opinion on why Valve isn’t pushing for 128 tick servers.

Servers Running Content Against Valve’s Server Guidelines
As some of you are aware, there are many servers running plugins that violate Valve’s server guidelines. These plugins give players knives from the CS:GO market, weapon paints from the CS:GO market, etc. Although a majority of the player base enjoys the free content, it is killing communities that obey Valve’s guidelines. A few of you may say, “Well, these communities should just put the plugins on their servers to compete.” There are a few valid reasons why communities shouldn’t run these plugins. Firstly, it is wrong. Although Valve doesn’t appear to care about community servers (from my view), it is still wrong. Secondly, you are just hurting other communities that want to obey the server guidelines. Thirdly, I had custom weapon models (not sold on the CS:GO market) on my servers in the past and suffered two bans, and I’ll be honest, it is not fun moving the servers onto a new account. In fact, I spent around one to two hours moving them after each ban I received (remember, these bans come with no details, and I had no clue which server was triggering the ban). Valve has taken action in the past to eliminate and ban servers going against their guidelines, but it hasn’t been enough. There are still plenty of servers out there with these plugins, and they’re thriving better than ever. Here are some solutions Valve could use to eliminate these issues.
Make The Ban Waves More Frequent
This is the most viable option because it will eliminate servers intentionally going against Valve’s guidelines. Currently, ban waves aren’t consistent. A few weeks ago, I made a thread on the mailing list that addressed the inconsistency of server bans. After this thread was made, I saw three ban waves in three days (one per day). Afterwards, it stopped for a week or two. In order for this ban system to work out, Valve needs to start issuing more bans and stay consistent.

Prevent Servers From Giving Banned Content
This is also a great option. As far as I know, TF2 does this and it works out great. Basically, just prevent servers from giving out banned content (e.g. knives, etc). A server modification called SourceMod already has a feature that prevents servers from executing code that triggers a server ban. If the SourceMod developers can do this, the CS:GO developers definitely can too.

Remove The Ban On The Plugins
This would allow servers to compete on an even playing field again. Although, since Valve actually makes money each time they run a ban wave, I doubt this will happen.

To sum up, many great communities are suffering because they obey Valve’s guidelines. From what I’ve seen, many of the servers running the banned plugins also offer a poor environment for their players. For example, I’ve seen servers running these plugins spam MOTD ads mid-round (every minute), yet still sit in the top ten on GameTracker (a server ranking website). My point is, there are many bad servers thriving because nothing is being done about these plugins.

Valve’s Communication With Server Operators
The next thing I want to talk about is Valve’s communication with server operators. Currently, I believe it is poor. Valve rarely communicates with server operators. For example, the CS:GO developers have stopped posting on the mailing list after releasing each update.
At one point, they would post on the csgo_servers mailing list after releasing each update. These posts would include all of the relevant update notes. This was great because server operators subscribed to the mailing list would know via e-mail when updates were released. The TF2 developers, by contrast, still post on the mailing list after releasing each TF2 update. There are other areas where Valve could communicate better. Another example was when they started to enforce the server guidelines: Valve never made it clear whether custom weapon models are allowed or not. This question is still unanswered to this day. Communication between Valve and the community is important. Although I have seen an improvement recently, it is still very poor.

Consider A Beta For Source 2
A hot topic recently has been when (or if) CS:GO will move to the Source 2 engine. Personally, I believe it will. However, I haven’t seen any solid proof yet. If CS:GO is moving to the Source 2 engine, I would like to see an open beta beforehand. This would give people time to point out bugs, etc. in CS:GO as a whole. As for community servers, if there’s no beta or communication beforehand, this move will most likely be a disaster for server operators. I highly hope Valve considers an open beta before moving CS:GO to the Source 2 engine.

Current State Of The Server Browser
The current state of the CS:GO server browser is poor. The server browser is highly outdated and doesn’t even match CS:GO’s color scheme. Moreover, I’ve seen reports that players still only see around 100-200 servers in the server browser, which should have been fixed more than a year ago. In my other Reddit thread, I suggested completely remaking the server browser, perhaps modeling it on the Garry’s Mod server browser. It would be nice to see the server browser organized into groups for each game mode, like the Garry’s Mod server browser.
Of course, an option to list “all” community servers should be available as well. I still feel this is the best direction to go. That said, I want to give another suggestion for the community server browser. After doing some testing with the Steam Master Server API, I’ve discovered there are at least 20,000 empty community servers and around 11,000 community servers with at least one player (on average). This does not include Valve’s official servers. Sadly, the cap on the number of servers I can retrieve from the API is 20,000, which means there are more than 20,000 empty community servers being displayed on the server browser versus only ~11,000 (peak) non-empty servers. I think it would be best to give non-empty servers an advantage on the server browser; for example, have the “Has players playing” filter checked by default. Although I can understand server operators being upset over this, I still feel it would be the best option. In the end, server operators should have a player base trying to populate their servers. As of right now, I can get five to ten players on my servers, but it won’t go above twelve. I feel the server browser being cluttered with empty servers isn’t helping. If and when CS:GO moves to the Source 2 engine, I highly hope they remake the server browser and do it well. You can read my other Reddit post (linked at the top) for more details about the community server browser.

Other Small Things
There are a few other small things I want to talk about, including suggestions, issues, etc.

Linux Limitation
As mentioned in my other Reddit post, there is currently a Linux limitation in CS:GO affecting servers: CS:GO Linux servers don’t appear to use the networking thread, while CS:GO Windows servers do. From my understanding, the extra thread (AKA the networking thread) handles the networking CPU load of the CS:GO server.
This makes a huge difference on larger CS:GO servers (basically, think of the server being able to use 150% CPU with the networking thread instead of 100% CPU without it). Valve has been aware of this issue for quite some time now, but it is still not fixed. While I understand the CS:GO developers are unsure what could be causing the limitation, I would appreciate some communication about the issue. Hopefully this can be fixed sometime in the future. Many server operators want to run CS:GO servers on Linux but sadly can’t due to this limitation (unless you want terrible server performance).

XP And Quick Play For Community Servers
Currently, official player XP and Quick Play are disabled for community servers. I believe players should be able to earn at least some XP on community servers, and community servers should be put into the Quick Play pool (only servers running official maps). While I understand why Valve doesn’t want players receiving XP on community servers, it is drawing many players away from them. The biggest issue I see with allowing players to gain XP on community servers would be bad server operators intentionally making servers for players to farm XP in. Personally, I’m unsure about the best way to prevent the system from being abused by bad server operators, but I do have suggestions:

Limit XP For Players While Playing On Community Servers
Although server operators may be against this, I believe this would help against servers attempting to farm XP. XP can be limited per player or per server. For example, if XP is limited per server, the server can only give x amount of XP daily to players.

Track XP By Server
If Valve decides to track XP on community servers per server, I feel more options would open up. For example, if a server is caught farming XP and banned, all the XP given to players by that server would be removed. Although, I can definitely see players being upset over this.
For example, if a player joined the server while the server wasn’t farming XP, they could lose all the XP they had gained when the server was banned later. That said, perhaps Valve could determine when the server started farming XP and only remove players’ XP between specific dates. This would obviously take some work, but I feel it would be the best option.

Community Servers Have A Different XP System
I’m not sure if this is even an option, and I can see many people disagreeing with it. However, I just wanted to throw it out there anyway. Community servers could have their own XP system. For example, under the official XP bar, there could be a “Community XP” bar used for community servers. There are definitely ways to allow players to earn XP on community servers without the system being abused, but it will require work. I hope the CS:GO developers at least consider this feature, as it would help community servers out a lot.

I also believe community servers should be in the Quick Play pool (only servers running official maps). For a while, community servers were in the Quick Play pool, but they were taken out a couple of years ago after an update. Server operators trying to run stock servers likely stand no chance against Valve’s official servers, which have a big advantage (Quick Play + official player XP). There are servers out there that offer an arguably better gaming experience than Valve servers (e.g. 128 tick and no lag), but they are sadly empty most of the time. Naturally, players should be able to easily blacklist servers they do not like. Once a server is blacklisted, they will not be put into that community server again through Quick Play. Perhaps add another option, “Blacklist all servers under this account,” which would blacklist all servers under the same Steam account as the original server you blacklisted.
For example, if a server operator puts up sixteen DeathMatch servers with terrible server performance and MOTD ads spamming every minute, of course somebody is going to want to blacklist them. However, there is a chance they’ll be put into one of that operator’s other servers. Therefore, an option to blacklist all servers under the specific account would save the player time.

Recently Patched Exploit
On July 27th, 2016, an exploit that allowed players to intentionally crash community servers was officially patched. While I do appreciate Valve officially patching this exploit, it took them over a month to do so. The first time I saw the exploit reported was on June 20th. It was reported to Valve multiple times since then, and they still took over a month to eventually fix it. This, in my opinion, is ridiculous. Thankfully, the modding community was able to quickly write a SourceMod extension to patch the exploit. However, this extension was somewhat hidden in a thread on the SourceMod forum (AlliedModders), and there were still server operators completely unaware of the extension who kept asking why their servers were getting attacked. Valve should make these types of things their main priority. In my opinion, exploits shouldn’t exist for longer than a week, let alone over a month. I hope Valve realizes how bad this makes them look and starts to improve in the future.

128 Tick For Valve Official Servers
The next thing I want to talk about isn’t a suggestion or issue; I simply want to let everybody know why I think Valve isn’t pushing for 128 tick servers. Running 128 tick is very heavy on the server machine’s CPU. In fact, I believe 128 tick takes around 2-3x more CPU than 64 tick (for servers, not players). I would imagine this would cost Valve a lot of money in machine costs and such, although Valve should have the money for it. Personally, I think that is the main reason they aren’t pushing for 128 tick.
It may sound obvious, but a lot of players seem to expect them to move to 128 tick. As of right now, Valve does not appear to want to spend money on something they don’t “need”.

Conclusion
To sum up, CS:GO’s current state regarding community servers is poor. From my view, the CS:GO developers do not seem to care about community servers. I am aware that they are simply following basic business sense, as a corporation should, but I think it would be in everyone’s best interest if Valve offered more community support. Of course, a majority of these issues may be fixed once the Source 2 engine is released. However, Valve hasn’t communicated about any of them, so I am assuming the worst outcome. I truly hope Valve does start to improve in the future. Not only are community servers suffering, but the game as a whole appears to be as well (many issues going unnoticed, etc.). If you have other suggestions for CS:GO, feel free to post them in this thread! There are still many issues that need to be fixed; I wrote about the ones that are most important to community servers from my view. Thank you for reading.

Reddit Thread
  5. This was the first Reddit thread I made back in 2016 when trying to address issues to Valve regarding modding support in CS:GO. Sadly, they never replied or reached back out, but the Reddit thread did receive a lot of positive attention!

Valve's Poor Support For Community Servers And Lack Of Communication

Hello everybody, I would like to address a few ideas/suggestions for CS:GO along with discussing the current state of community support. Recently, I have been thinking about how to improve CS:GO. However, we first need to list the current issues with Valve’s support for community servers.

Custom Weapon Model Bans & GSLT System
In the last couple of months, Valve started banning CS:GO game servers that violated the guidelines posted here. Back in the summer of 2015, we did provide stock CS:GO knives. At the time, this was allowed and it made players happy. When Valve notified all the server owners on the CS:GO servers mailing list that these plugins would result in a GSLT ban, we immediately removed the specific knife plugins. However, a few months later, somebody developed a plugin that gave players custom weapon models (not sold on the CS:GO market). Seeing this opened up a lot of potential and customization in the game, and thus we added this plugin to our servers. Originally, we thought this wasn’t against the guidelines (it shouldn’t be). Nonetheless, it sadly is. Valve, why is it against the rules to provide players with custom knives not sold on the CS:GO market? I can understand the stock CS:GO knives being against the rules, but custom weapon models? These should definitely be allowed, especially when Valve’s public image is that of a company that highly supports community-made content. Currently, Valve’s public image seems highly false and misleading to me. With that said, the banning system they have developed is poorly made as well. Honestly, they most likely spent a total of two hours developing this system.
Here is a list of things wrong with the banning system itself:

It doesn’t tell you which server triggered the detection or the reason for the ban.
It permanently bans your account from hosting servers again. The first ban should definitely be temporary.
There are no warnings ahead of time.

Finally, the CS:GO developers themselves failed to communicate properly on the subject. Important questions about this system are still unanswered to this day. Overall, this system is a joke in my eyes and an embarrassment to Valve. Server owners definitely deserve better.

Exploits Left Unpatched For Months
Recently, there have been occasional server exploits going around. One specific exploit existed for around two months and was mentioned to the CS:GO developers multiple times, yet it wasn’t patched until a couple of months after the original report. This isn’t the first Valve game with delayed exploit patches. Older Valve games had exploits that were left unpatched for months, and possibly years, after the original report. Most of the time, exploits are patched only when they start affecting Valve’s servers or become common knowledge (e.g. a popular Reddit thread). To me and many others, that is ridiculous. Security is very important, and when exploits are left unpatched for months without any communication from Valve, there’s definitely a major problem.

Linux Limitation & Other Small Things
I’ve e-mailed Valve multiple times about a Linux limitation which greatly decreases server performance on popular CS:GO Linux servers. Somewhat expectedly, I received no response, as usual. There have been other things I and many others have suggested to Valve. It goes without saying that our thoughts have been ignored.

Conclusion
To conclude, I believe Valve’s support for community servers is at an all-time low and continues to wane. At this point, I feel as though Valve only cares about the amount of money they’re making from their games.
Ideas
Now that my rant is over, it is time to start talking about some ideas that I believe would improve CS:GO. The ideas are listed below.

Server Browser
I believe a new and improved server browser would really help out community servers in CS:GO. The current server browser appears heavily outdated and doesn’t match the CS:GO color theme; it is gray, which clashes with the blue-ish main menu theme. I also believe that making two separate layouts for the server browser would make it feel more modern. The two layouts are described below.

Complex Layout
In the complex layout, community game modes would be listed. When you click on one of these game modes, the menu would expand to show all the servers running that game mode. Some examples of game modes include Zombie Escape, Zombie Mod, Surf Timer, Bunny Hop, etc. This would be similar to the Garry’s Mod server browser. However, instead of relying on a text file for the game mode, it would depend on the map prefix:

ze_ - Zombie Escape
zm_ - Zombie Mod
surf_ - Surf Timer or Surf Deathmatch
bhop_ - Bunny Hop
de_ - Defuse (or whatever you call it)
cs_ - Hostages
Etc…

Simple Layout
The simple layout would basically work like the current server browser, although the color scheme and style would need to be changed to fit CS:GO’s. There is also currently a ~5,000 cap on the number of servers the server browser can display. On paper, this sounds relatively high. That being said, there are around ~50K community servers. I believe removing the ~5,000 cap would be beneficial and would increase the chances of players seeing every server on the server browser (as it should be). To conclude, a new and improved server browser would be a big step in strengthening the support for community servers. This idea has been proposed before, but as usual, nothing has been done by the CS:GO developers. We are currently looking into making the server browser using HTML, JavaScript, etc.
If you are interested in helping with this project, please reply to this thread!

Quick Play For Community Servers
First, quick play is the traffic from players using the “Find a game” option in CS:GO (which a majority of the player base chiefly uses). The next thing I want to talk about is quick play support for community servers. Currently, community servers do not receive traffic from the quick play system. Even if your server runs vanilla game play and offers an even better gaming experience than Valve (e.g. 128 tick), you can only rely on players finding your server through the server browser, their friends list, or connecting through the console. From what I’ve heard, an option to be put into community servers via quick play did exist initially, although it has since been removed. I estimate around 85% of the CS:GO player base only uses the quick play system (and most likely hasn’t discovered the server browser). With that said, I would like to address another concern for community servers. Currently, players do not gain XP while playing on community servers. This also serves to drive players away from community servers. The only valid reason I can think of for not allowing players to gain XP on community servers is the possibility of farming XP. However, farming XP is difficult and considered pointless due to CS:GO’s current way of handling XP (e.g. XP is already limited, and the more a player plays CS:GO, the less XP they gain). Even so, simply enabling XP on community servers running stock maps would aid those servers.

Small Issues
I want to address a few small issues that I believe would strengthen CS:GO upon being fixed. These are mostly issues that have existed for a long time (I have sent Valve most of these issues in the past, though with no results).

Player Names Not Showing While Aiming On Large Servers
When you aim at players, it should display their name (just like the old CS games).
However, in CS:GO, this feature is broken on servers with more than 32 players. It doesn’t matter what mp_playerid is set to; the feature eventually breaks. This appears to only break with teammates. Feel free to watch this YouTube video to see the issue itself.

Linux Limitation
As mentioned in the past, CS:GO Linux servers perform poorly, especially large ones. This is because CS:GO Linux servers don’t use the networking thread the way Windows servers do. More information can be found in this mailing list thread. Many server owners (me included) would prefer to use Linux instead of Windows for personal reasons. With the aforementioned limitation, most cannot use Linux if they are hosting large servers, due to the bad performance.

New Skeleton/Hitboxes Performance Issues
To be honest, I cannot confirm this is the issue because I am not a modeler myself. However, a couple of modelers I have spoken to said that player models compiled with the new skeletons/hitboxes do decrease performance on large servers, especially when these newer models are taking damage (bullets penetrating the player). I’ve tried testing this myself, although I cannot find a difference when testing with bots. Though, when we replaced our player models compiled with the new skeletons/hitboxes with player models compiled with the older skeletons/hitboxes, server performance increased by 15%.

TL;DR
Valve’s support for community servers is at an all-time low and still decreasing. Since Valve is apparently listening now, I just wanted to throw out suggestions and ideas that would most likely improve CS:GO. I hope we can get some constructive discussions going, and hopefully a response from Valve, so that we may see a better community experience. Remember, the game developers aren’t the only ones at Valve to blame for this mess. Feel free to give feedback (e.g. better ways to improve the server browser, etc)! Thank you for reading.

Reddit Thread
  6. I'll likely be making one under the discussions category. We can also add a GFX section to each modding sub-forum, but I intended to use the "Media and Video" forum for that. Thank you for the feedback
  7. Welcome, Deadly ViruS, and nice work :)
  8. It would be great if we had a GFX section with these sub-boards: General Discussion, Showcase (to show off our creations, whether hand-made or made with programs), Tools & Help, and Requests (where members can request the design they'd like from us).
  9. Thank you so much! If you want, I can help out with the design side of things for the community :)
  10. Welcome and your designs look great
  11. Hello, my name is Raphael Michael (also known as Deadly ViruS). I'm 31 years old and live in Greece. I'm an employee at a municipal company, but in my free time I'm a freelance designer. I can design whatever the customer wants FOR FREE, so if any of you are looking for a designer, I'm here to answer all your questions and make your imagined design come true! Don't hesitate to PM me anytime. Here is my contact info:

Facebook Fan Page: https://facebook.com/volcanoxdesigns
Instagram: Rafaelos_1991
Skype: rafael.gewrgalis
Discord: ViperZ#7406
  12. General discussion on the project. I'll try to participate best I can.
  13. GitHub source: https://github.com/wtfsystems/wtengine
API docs: https://www.wtfsystems.net/docs/wtengine/index.html
Allegro: https://liballeg.org
  14. A SourceMod plugin that adds extra CT and T spawn points in Counter-Strike: Source and Counter-Strike: Global Offensive. This is useful for large servers that have to deal with maps that don't have enough spawn points.

NOTE - When an additional spawn point is added, it uses the vector and angle from an existing spawn point for that team.

ConVars
sm_ESP_spawns_t - Number of spawn points to enforce on the T team (default 32).
sm_ESP_spawns_ct - Number of spawn points to enforce on the CT team (default 32).
sm_ESP_teams - Which teams to add additional spawn points for. 0 = disabled, 1 = all teams, 2 = Terrorists only, 3 = Counter-Terrorists only (default 1).
sm_ESP_course - Whether to enable course mode. If 1, when T or CT spawns are set to 0, the opposite team gets double the spawn points (default 1).
sm_ESP_debug - Whether to enable debugging (default 0).
sm_ESP_auto - Whether to add spawn points when a ConVar is changed. If 1, spawn points are added as soon as a ConVar is changed (default 0).
sm_ESP_mapstart_delay - The delay of the timer that adds spawn points on map start (default 1.0).

Commands
sm_addspawns - Attempts to add spawn points.
sm_getspawncount - Retrieves the current spawn count on each team.
sm_listspawns - Lists the vectors and angles of each spawn point on each team. Please note a client may have issues outputting all of the details into their console; using the server console has been very consistent from what I've seen.

Installation
Copy the compiled ExtraSpawnPoints.smx file into the server's addons/sourcemod/plugins directory. For compiling from source, the source code is available at scripting/ExtraSpawnPoints.sp. To enable the plugin, either restart the map or server, or execute the following SourceMod command:

sm plugins load ExtraSpawnPoints

Credits
@Christian

GitHub Repository & Source Code
ExtraSpawnPoints.sp
ExtraSpawnPoints.smx
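As a quick illustration of how the ConVars above might be combined, here is a server.cfg fragment. The values are purely illustrative examples, not recommendations from the plugin author:

```
// Enforce 40 spawn points on each team (example values only).
sm_ESP_spawns_t 40
sm_ESP_spawns_ct 40

// Add spawn points for both teams.
sm_ESP_teams 1

// Re-apply spawn points as soon as one of these ConVars changes.
sm_ESP_auto 1

// Wait slightly longer after map start before adding spawn points.
sm_ESP_mapstart_delay 2.0
```

With sm_ESP_auto set to 1, changing any of these values from the server console should take effect without running sm_addspawns manually.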
  15. Tests I've performed converting C to Assembly. Basically testing performance for code I've made in C and whatnot! A small repository to store my findings with converting C code to Assembly code along with measuring performance between the different clang optimization levels. I'm starting to learn more about Assembly because I want to understand how programs work on a very low level so I can optimize it the best I can. I've made the following source files to test with. Two source files for copying eight bytes of data from one 8-bit array (8 bytes in size) to another. One source file uses a for loop to achieve this while the other uses the native memcpy() function. Two source files for comparing a variable to five values. One source file uses if and else if while the other uses a switch statement. A source file that copies a string and outputs it to stdout. Two source files testing a for loop along with seeing if there's a difference when specifying pragma #unroll x which should unroll the for loop and result in better performance in our case. I'll likely be adding more files to this repository as time goes on. Dumping Assembly Code I used clang to emit LLVM and create the .bc file with no optimizations by the compiler (the -O0 flag). An example may be found below. clang -c -emit-llvm -O0 -o asm/testO2.bc src/test.c Since we emit LLVM, we may use the llc command to dump the Assembly code under specific optimization levels. I dump both the native architecture's Assembly code and also Intel's Assembly code (these Assembly files are appended with _intel). Here's an example using optimization level 2 (notice the -O=2 flag in the llc command). # Native architecture's Assembly code. llc -filetype=asm -O=2 -o asm/testO2.s asm/testO2.bc # Intel Assembly code. 
llc -filetype=asm -O=2 -o asm/testO2_intel.s --x86-asm-syntax=intel asm/testO2.bc

NOTE - I'd recommend using the scripts/genassembly.sh Bash script I made, which generates Assembly code at optimization levels 0 (none) through 3 in both the default and Intel syntaxes. The script only requires one argument: the name of the source file in src/ without the file extension (.c). Also make sure to modify the ROOTDIR variable if you place the script outside of this repository's scripts/ directory. An example may be found below.

./genassembly.sh pointer

Optimization Levels

Clang's optimization levels may be found in its manual page (man clang). For reference, here are the levels.

Code Generation Options

-O0, -O1, -O2, -O3, -Ofast, -Os, -Oz, -Og, -O, -O4

Specify which optimization level to use:

-O0 Means "no optimization": this level compiles the fastest and generates the most debuggable code.
-O1 Somewhere between -O0 and -O2.
-O2 Moderate level of optimization which enables most optimizations.
-O3 Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
-Ofast Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards.
-Os Like -O2 with extra optimizations to reduce code size.
-Oz Like -Os (and thus -O2), but reduces code size further.
-Og Like -O1. In future versions, this option might disable different optimizations in order to improve debuggability.
-O Equivalent to -O2.
-O4 and higher Currently equivalent to -O3.

You'll notice a lot of optimizations within the Assembly code from -O1 to -O3.

System

This was all tested on my Linux VM running virtio_net drivers and Ubuntu 20.04 Server. The tests in asm/ were built on Linux kernel 5.15.2-051502-generic.

Credits

@Christian

GitHub Repository & Source Code
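For reference, the byte-copy tests described above boil down to something like the following (a sketch of the idea; the actual files in src/ may differ):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define LEN 8

/* Copy eight bytes with a plain for loop. */
static void copy_loop(uint8_t *dst, const uint8_t *src)
{
    for (int i = 0; i < LEN; i++)
        dst[i] = src[i];
}

/* Copy eight bytes with the native memcpy() function. At -O1 and
 * above, clang typically lowers both versions to the same handful
 * of mov instructions. */
static void copy_builtin(uint8_t *dst, const uint8_t *src)
{
    memcpy(dst, src, LEN);
}
```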
  16. Examples of C programs using hashing functions from GNOME's GLib library. A repository I'm using to store my progress while learning GNOME's GLib library. Specifically hashing via ghash. Test Files GLib Structures (glib_structs.c) In this test, we have structures as keys and values (all integers). However, the total size for each the key and value exceeds 64-bits. We also use the g_int_hash hashing function (along with g_int_equal) which works fine. By default, the amount of entries it inserts and looks up is 5 (MAX_ENTRIES_DEFAULT). However, the first argument of the program determines how many entries to insert and lookup. Please look at the following example. ./build/glib_structs 20 The above (after building, of course) will output the following. [email protected]:~/glib-tests$ ./build/glib_structs 20 Successfully inserted entry! Key => 0:0:0:0 (0). Val => 0:0:0:0:0:0. Successfully inserted entry! Key => 300:3:600:90 (1). Val => 1:2:3:4:100000:10000000. Successfully inserted entry! Key => 600:6:1200:180 (2). Val => 2:4:6:8:200000:20000000. Successfully inserted entry! Key => 900:9:1800:270 (3). Val => 3:6:9:12:300000:30000000. Successfully inserted entry! Key => 1200:12:2400:360 (4). Val => 4:8:12:16:400000:40000000. Successfully inserted entry! Key => 1500:15:3000:450 (5). Val => 5:10:15:20:500000:50000000. Successfully inserted entry! Key => 1800:18:3600:540 (6). Val => 6:12:18:24:600000:60000000. Successfully inserted entry! Key => 2100:21:4200:630 (7). Val => 7:14:21:28:700000:70000000. Successfully inserted entry! Key => 2400:24:4800:720 (8). Val => 8:16:24:32:800000:80000000. Successfully inserted entry! Key => 2700:27:5400:810 (9). Val => 9:18:27:36:900000:90000000. Successfully inserted entry! Key => 3000:30:6000:900 (10). Val => 10:20:30:40:1000000:100000000. Successfully inserted entry! Key => 3300:33:6600:990 (11). Val => 11:22:33:44:1100000:110000000. Successfully inserted entry! Key => 3600:36:7200:1080 (12). 
Val => 12:24:36:48:1200000:120000000. Successfully inserted entry! Key => 3900:39:7800:1170 (13). Val => 13:26:39:52:1300000:130000000. Successfully inserted entry! Key => 4200:42:8400:1260 (14). Val => 14:28:42:56:1400000:140000000. Successfully inserted entry! Key => 4500:45:9000:1350 (15). Val => 15:30:45:60:1500000:150000000. Successfully inserted entry! Key => 4800:48:9600:1440 (16). Val => 16:32:48:64:1600000:160000000. Successfully inserted entry! Key => 5100:51:10200:1530 (17). Val => 17:34:51:68:1700000:170000000. Successfully inserted entry! Key => 5400:54:10800:1620 (18). Val => 18:36:54:72:1800000:180000000. Successfully inserted entry! Key => 5700:57:11400:1710 (19). Val => 19:38:57:76:1900000:190000000. Size of table is now 20 (entries). Lookup successful! Key => 0:0:0:0 (0). Val => 0:0:0:0:0:0. Lookup successful! Key => 300:3:600:90 (1). Val => 1:2:3:4:100000:10000000. Lookup successful! Key => 600:6:1200:180 (2). Val => 2:4:6:8:200000:20000000. Lookup successful! Key => 900:9:1800:270 (3). Val => 3:6:9:12:300000:30000000. Lookup successful! Key => 1200:12:2400:360 (4). Val => 4:8:12:16:400000:40000000. Lookup successful! Key => 1500:15:3000:450 (5). Val => 5:10:15:20:500000:50000000. Lookup successful! Key => 1800:18:3600:540 (6). Val => 6:12:18:24:600000:60000000. Lookup successful! Key => 2100:21:4200:630 (7). Val => 7:14:21:28:700000:70000000. Lookup successful! Key => 2400:24:4800:720 (8). Val => 8:16:24:32:800000:80000000. Lookup successful! Key => 2700:27:5400:810 (9). Val => 9:18:27:36:900000:90000000. Lookup successful! Key => 3000:30:6000:900 (10). Val => 10:20:30:40:1000000:100000000. Lookup successful! Key => 3300:33:6600:990 (11). Val => 11:22:33:44:1100000:110000000. Lookup successful! Key => 3600:36:7200:1080 (12). Val => 12:24:36:48:1200000:120000000. Lookup successful! Key => 3900:39:7800:1170 (13). Val => 13:26:39:52:1300000:130000000. Lookup successful! Key => 4200:42:8400:1260 (14). Val => 14:28:42:56:1400000:140000000. 
Lookup successful! Key => 4500:45:9000:1350 (15). Val => 15:30:45:60:1500000:150000000. Lookup successful! Key => 4800:48:9600:1440 (16). Val => 16:32:48:64:1600000:160000000. Lookup successful! Key => 5100:51:10200:1530 (17). Val => 17:34:51:68:1700000:170000000. Lookup successful! Key => 5400:54:10800:1620 (18). Val => 18:36:54:72:1800000:180000000. Lookup successful! Key => 5700:57:11400:1710 (19). Val => 19:38:57:76:1900000:190000000. Building You may use git and make to build this project. You will also need clang and libglib2.0. The following will do for Debian/Ubuntu systems. # Run apt update as root. sudo apt update # Install Make and build essentials. sudo apt install build-essential # Install Clang if it isn't installed already. sudo apt install clang # Install GLib 2.0 along with pkg-config (which is used for obtaining GLib's include paths and linker libraries). sudo apt install libglib2.0 pkg-config # Clone the repository. git clone https://github.com/gamemann/GLib-Tests.git # Change directory to GLib-Tests/. cd GLib-Tests/ # Run Make which will output executables into the build/ directory. make Credits @Christian GitHub Repository & Source Code
  17. A small project that allows you to gather statistics (integer counts) from files on the file system. It was designed for Linux and is useful for retrieving values such as incoming/outgoing packets per second or incoming/outgoing bytes per second on a network interface.

Building Program

You can use make to build this program; the Makefile uses clang to compile it.

# (Debian/Ubuntu-based systems.)
apt-get install clang

# (CentOS/others.)
yum install devtoolset-7 llvm-toolset-7 llvm-toolset-7-clang-analyzer llvm-toolset-7-clang-tools-extra

# Build the project.
make

You may use make install to copy the gstat executable to your $PATH via /usr/bin. Note - The executable is named gstat instead of stat to avoid conflicting with other common packages.

Command Line Usage

General command line usage can be found below.

gstat [-i <interface> --pps --bps --path <path> -c <"kbps" or "mbps" or "gbps"> --custom <integer>]

--pps => Set the path to the RX packet counter.
--bps => Set the path to the RX byte counter.
-p --path => Use the count (integer) from a given path on the file system.
-i --dev => The name of the interface to use when setting --pps or --bps.
-c --convert => Convert to either "kbps", "mbps", or "gbps".
--custom => Divides the count value by this much before outputting to stdout.
--interval => Use this interval (in microseconds) instead of one second.
--count -n => Maximum number of times to request the counter before stopping the program (0 = no limit).
--time -t => Time limit (in seconds) before stopping the program (0 = no limit).

Note - If you want to read another counter, such as outgoing (TX) packets, you can set the file to pull the count from with the -p (or --path) flag. For example.

gstat --path /sys/class/net/ens18/statistics/tx_packets

Credits

@Christian

GitHub Repository & Source Code
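The core of what the tool does — sample a counter file and turn the delta into a per-second rate — can be sketched like this (an illustration under assumed behavior, not gstat's actual source; the function names are made up):

```c
#include <assert.h>
#include <stdio.h>

/* Read a single integer counter from a file, e.g.
 * /sys/class/net/<dev>/statistics/rx_packets. Returns -1 on failure. */
static long long read_counter(const char *path)
{
    FILE *fp = fopen(path, "r");
    long long val = -1;

    if (!fp)
        return -1;

    if (fscanf(fp, "%lld", &val) != 1)
        val = -1;

    fclose(fp);
    return val;
}

/* Per-second rate from two samples taken `interval_us` microseconds apart. */
static double rate_per_sec(long long prev, long long cur, long long interval_us)
{
    return (double)(cur - prev) * 1000000.0 / (double)interval_us;
}
```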
  18. A program that calculates packet stats inside of an XDP program (with support for both dropping and TX'ing the packet). As of right now, the stats are just the number of packets and bytes (including per second). By default, stats are calculated for UDP packets with the destination port 27015. You may adjust the port inside of src/include.h. If you comment out the TARGETPORT define with //, it will calculate stats for packets on all ports.

Command Line Options

The following command line options are supported.

-i --interface => The interface name to attempt to attach the XDP program to (required).
-t --time => How long to run the program for, in seconds.
-x --afxdp => Calculate packet counters inside of an AF_XDP program and drop or TX the packets.
-r --tx => TX the packet instead of dropping it (supports both XDP and AF_XDP).
-c --cores => If AF_XDP is specified, use this flag to override how many threads/AF_XDP sockets are spun up (keep in mind this should match the number of RX queues you have, since each socket binds to an individual RX queue).
-s --skb => Force SKB mode.
-o --offload => Try loading the XDP program in offload mode.

TX Modes

There are two TX modes, and they must be adjusted inside the source files. By default, a FIB lookup is performed inside of the XDP program; if a match is found, the packet is TX'd and the stats are updated in the raw XDP or AF_XDP programs. Otherwise, the packet is dropped. The second mode simply switches the Ethernet header's source and destination MAC addresses and TX's the packet back out. For performance reasons, I didn't include it as a command line option. Instead, you will need to go to src/xdp/raw_xdp_tx.c (for raw XDP) or src/af_xdp/raw_xdp_tx.c and comment out the #define FIBLOOKUP line by adding // in front. For example:

//#define FIBLOOKUP

Building

You may use the following to build the program.

# Clone the repository and libbpf (with the --recursive flag).
git clone --recursive https://github.com/gamemann/XDP-Stats.git # Change directory to the repository. cd XDP-Stats # Build the program. make # Install the program. The program is installed to /usr/bin/xdpstats sudo make install Credits @Christian GitHub Repository & Source Code
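The default filter — count only UDP packets destined for port 27015 — amounts to a header check like the following userspace sketch (plain C over a byte buffer for illustration, not the XDP program itself):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TARGETPORT 27015 /* Mirrors the define in src/include.h. */

/* Return 1 if `pkt` is an untagged IPv4/UDP frame whose UDP destination
 * port equals TARGETPORT. Offsets: 14-byte Ethernet header, IHL-sized
 * IPv4 header, then the UDP header. */
static int matches_target(const uint8_t *pkt, size_t len)
{
    size_t ihl;
    const uint8_t *udp;

    if (len < 14 + 20 + 8)
        return 0;

    /* EtherType must be IPv4 (0x0800). */
    if (pkt[12] != 0x08 || pkt[13] != 0x00)
        return 0;

    /* IPv4 protocol field must be UDP (17). */
    if (pkt[23] != 17)
        return 0;

    ihl = (size_t)(pkt[14] & 0x0F) * 4;
    if (len < 14 + ihl + 8)
        return 0;

    udp = pkt + 14 + ihl;
    /* UDP destination port is big-endian at bytes 2-3 of the UDP header. */
    return ((uint16_t)(udp[2] << 8) | udp[3]) == TARGETPORT;
}
```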
  19. A personal tool using Python's Scrapy framework to scrape Best Buy's product pages for RTX 3080 TIs and send a notification when one is available/not sold out. This is my first project using Python's Scrapy framework; I'm using it personally for a couple of friends and myself. Basically, it scrapes a Best Buy product listing page for RTX 3080 TIs. It scans each product, and if the c-button-disable class doesn't exist within an entry (indicating the product is not sold out and is available), it emails a list of users from the settings.py file. It keeps each product ID tracked in SQLite to make sure users don't get emailed more than once.

Requirements

The Scrapy framework is required and may be installed with the following.

python3 -m pip install scrapy

Settings

Settings are configured in the src/bestbuy_parser/bestbuy_parser/settings.py file. The following are the defaults.

# General Scrapy settings.
BOT_NAME = 'bestbuy_parser'

SPIDER_MODULES = ['bestbuy_parser.spiders']
NEWSPIDER_MODULE = 'bestbuy_parser.spiders'

TELNETCONSOLE_ENABLED = False
LOG_LEVEL = 'ERROR'

# The User Agent used to crawl.
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'

# Obey robots.txt rules.
ROBOTSTXT_OBEY = True

# Best Buy Parser-specific settings.

# The email's subject.
MAIL_SUBJECT = "RTX 3080 TI In Stock On Best Buy!"

# Where the email is coming from.
MAIN_FROM = "[email protected]"

# The email body.
MAIL_BODY = '<html><body><ul><li><a href="https://www.bestbuy.com{link}">{name}</a></li><li>{price}</li></ul></body></html>'

Running The Program

You must change the working directory to src/bestbuy_parser/bestbuy_parser via cd. Afterwards, you may run the following.

python3 parse.py

This will run the program until a keyboard interrupt.

Systemd Service

A systemd service file is included in the systemd/ directory. It assumes you cloned the repository into /usr/src (you will need to change the systemd file if this is not correct).
You may install the systemd service via the following command as root (or with sudo).

sudo make install

Credits

@Christian

GitHub Repository & Source Code
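The availability check described above — a product counts as in stock when the sold-out button class is absent from its HTML — can be sketched in a few lines of Python (a standalone illustration; the real spider's selectors and markup handling may differ):

```python
def is_available(product_html: str, sold_out_class: str = "c-button-disable") -> bool:
    """Return True when the sold-out marker class is absent from a
    product entry's HTML, meaning the item can be purchased."""
    return sold_out_class not in product_html


# Simplified stand-ins for Best Buy product markup.
sold_out = '<button class="c-button c-button-disable">Sold Out</button>'
in_stock = '<button class="c-button add-to-cart-button">Add to Cart</button>'
```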
  20. A repository that includes common helper functions for writing applications with the DPDK. I will be using this for my future DPDK projects. This project includes helpful functions and global variables for developing applications that use the DPDK. A majority of this code comes from the l2fwd example in the DPDK's source files, but I rewrote all of the code to learn more from it, and I tried adding as many comments as I could explaining what I understand of the code. I also reorganized the code heavily and removed a lot of things I thought were unnecessary for developing my applications.

I want to make clear that I am still new to the DPDK. While the helper functions and global variables in this project don't allow for in-depth configuration of the DPDK application, they are useful for general setups such as packet generator programs or a fast packet processing library where you're inspecting and manipulating packets. My main goal is to help other developers with the DPDK, along with myself. From what I've experienced, learning the DPDK can be very overwhelming due to its complexity. I mean, have you seen their programming documentation/guides here?! I'm just hoping to help other developers learn the DPDK. As time goes on and I learn more about the DPDK, I will add onto this project!

My Other Projects Using DPDK Common

I have other projects in the pipeline that'll use DPDK Common once I implement a few other things. However, here is the current list.

- Examples/Tests - A repository I'm using to store examples and tests of the DPDK while I learn it.

The Custom Return Structure

This project uses a custom return structure for functions returning values (non-void). The name of the structure is dpdkc_ret.
struct dpdkc_ret {
    char *gen_msg;
    int err_num;
    int port_id;
    int rx_id;
    int tx_id;
    __u32 data;
    void *dataptr;
};

With that said, the function dpdkc_check_ret(struct dpdkc_ret *ret) checks the structure for an error and, if one is found (err_num != 0), exits the application with debugging information. Integer results from functions returning this structure are stored in the data field, while pointer results are stored in dataptr; you will need to cast dataptr when using it in the application since it is of type void *.

Functions

Including the src/dpdk_common.h header in a source or another header file will additionally include general header files from the DPDK. With that said, it will allow you to use the following functions, which are a part of the DPDK Common project.

/**
 * Initializes a DPDK Common result type and returns it with default values.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret) with its default values.
**/
struct dpdkc_ret dpdkc_ret_init();

/**
 * Parses the port mask argument and stores it in the enabled_port_mask global variable.
 *
 * @param arg A (const) pointer to the optarg variable from getopt.h.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret). The port mask is stored in ret->data.
**/
struct dpdkc_ret dpdkc_parse_arg_port_mask(const char *arg);

/**
 * Parses the port pair config argument.
 *
 * @param arg A (const) pointer to the optarg variable from getopt.h.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_parse_arg_port_pair_config(const char *arg);

/**
 * Parses the queue number argument and stores it in the global variable(s).
 *
 * @param arg A (const) pointer to the optarg variable from getopt.h.
 * @param rx Whether this is a RX queue count.
 * @param tx Whether this is a TX queue count.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret). The amount of queues is stored in ret->data.
**/ struct dpdkc_ret dpdkc_parse_arg_queues(const char *arg, int rx, int tx) /** * Checks the port pair config after initialization. * * @return The DPDK Common return structure (struct dpdkc_ret). **/ struct dpdkc_ret dpdkc_check_port_pair_config(void); /** * Checks and prints the status of all running ports. * * @return Void **/ void dpdkc_check_link_status(); /** * Initializes the DPDK application's EAL. * * @param argc The argument count. * @param argv Pointer to arguments array. * * @return The DPDK Common return structure (struct dpdkc_ret). **/ struct dpdkc_ret dpdkc_eal_init(int argc, char **argv); /** * Retrieves the amount of ports available. * * @return The DPDK Common return structure (struct dpdkc_ret). Number of available ports are stored inside of ret->data. **/ struct dpdkc_ret dpdkc_get_nb_ports(); /** * Checks all port pairs. * * @return The DPDK Common return structure (struct dpdkc_ret). **/ struct dpdkc_ret dpdkc_check_port_pairs(); /** * Checks all ports against port mask. * * @return The DPDK Common return structure (struct dpdkc_ret). **/ struct dpdkc_ret dpdkc_ports_are_valid(); /** * Resets all destination ports. * * @return Void **/ void dpdkc_reset_dst_ports(); /** * Populates all destination ports. * * @return Void **/ void dpdkc_populate_dst_ports(); /** * Maps ports and queues to each l-core. * * @return The DPDK Common return structure (struct dpdkc_ret). **/ struct dpdkc_ret dpdkc_ports_queues_mapping(); /** * Creates the packet's mbuf pool. * * @return The DPDK Common return structure (struct dpdkc_ret). **/ struct dpdkc_ret dpdkc_create_mbuf(); /** * Initializes all ports and RX/TX queues. * * @param promisc If 1, promisc mode is turned on for all ports/devices. * @param rx_queues The amount of RX queues per port (recommend setting to 1). * @param tx_queues The amount of TX queues per port (recommend setting to 1). * * @return The DPDK Common return structure (struct dpdkc_ret). 
**/ struct dpdkc_ret dpdkc_ports_queues_init(int promisc, int rx_queues, int tx_queues); /** * Check if the number of available ports is above one. * * @return The DPDK Common return structure (struct dpdkc_ret). The amount of available ports is returned in ret->data. **/ struct dpdkc_ret dpdkc_ports_available(); /** * Retrieves the amount of l-cores that are enabled and stores it in nb_lcores variable. * * @return The DPDK Common return structure (struct dpdkc_ret). The amount of available ports is returned in ret->data. **/ struct dpdkc_ret dpdkc_get_available_lcore_count() /** * Launches the DPDK application and waits for all l-cores to exit. * * @param f A pointer to the function to launch on all l-cores when ran. * * @return Void **/ void dpdkc_launch_and_run(void *f); /** * Stops and removes all running ports. * * @return The DPDK Common return structure (struct dpdkc_ret). **/ struct dpdkc_ret dpdkc_port_stop_and_remove(); /** * Cleans up the DPDK application's EAL. * * @return The DPDK Common return structure (struct dpdkc_ret). **/ struct dpdkc_ret dpdkc_eal_cleanup(); /** * Checks error from dpdkc_ret structure and prints error along with exits if found. * * @return Void **/ void dpdkc_check_ret(struct dpdkc_ret *ret); The following function(s) are available if USE_HASH_TABLES is defined. /** * Removes the least recently used item from a regular hash table if the table exceeds max entries. * * @param tbl A pointer to the hash table. * @param max_entries The max entries in the table. * * @return 0 on success or -1 on error (failed to delete key from table). **/ int check_and_del_lru_from_hash_table(void *tbl, __u64 max_entries); Global Variables Additionally, there are useful global variables directed towards aspects of the program for the DPDK. However, these are prefixed with the extern tag within the src/dpdk_common.h header file allowing you to use them anywhere else assuming the file is included and the object file built from make is linked. 
// Variable to use for signals.
volatile __u8 quit;

// The RX and TX descriptor sizes (using defaults).
__u16 nb_rxd = RTE_RX_DESC_DEFAULT;
__u16 nb_txd = RTE_TX_DESC_DEFAULT;

// The enabled port mask.
__u32 enabled_port_mask = 0;

// Port pair params array.
struct port_pair_params port_pair_params_array[RTE_MAX_ETHPORTS / 2];

// Port pair params pointer.
struct port_pair_params *port_pair_params;

// The number of port pair parameters.
__u16 nb_port_pair_params;

// The port config.
struct port_conf ports[RTE_MAX_ETHPORTS];

// The amount of RX ports per l-core.
unsigned int rx_port_pl = 1;

// The amount of TX ports per l-core.
unsigned int tx_port_pl = 1;

// The amount of RX queues per port.
unsigned int rx_queue_pp = 1;

// The amount of TX queues per port.
unsigned int tx_queue_pp = 1;

// The queue's lcore config.
struct lcore_port_conf lcore_port_conf[RTE_MAX_LCORE];

// The buffer packet burst.
unsigned int packet_burst_size = MAX_PCKT_BURST_DEFAULT;

// The Ethernet port's config to set.
struct rte_eth_conf port_conf = {
    .rxmode = { .split_hdr_size = 1 },
    .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE }
};

// A pointer to the mbuf_pool for packets.
struct rte_mempool *pcktmbuf_pool = NULL;

// The current port ID.
__u16 port_id = 0;

// Number of ports and ports available.
__u16 nb_ports = 0;
__u16 nb_ports_available = 0;

// L-core ID.
unsigned int lcore_id = 0;

// Number of l-cores.
unsigned int nb_lcores = 0;

Credits

@Christian

GitHub Repository & Source Code
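To illustrate the return-structure convention this project describes, here is a stripped-down stand-in (hypothetical names; the real dpdkc_ret carries more fields and the real parsers behave differently):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct dpdkc_ret: an error number, an
 * optional message, and an integer result in `data`. */
struct ret {
    const char *gen_msg;
    int err_num;
    unsigned int data;
};

static struct ret ret_init(void)
{
    struct ret r = { NULL, 0, 0 };
    return r;
}

/* Parse a decimal count argument, reporting errors through the
 * structure instead of a bare int, in the spirit of
 * dpdkc_parse_arg_queues(). */
static struct ret parse_count(const char *arg)
{
    struct ret r = ret_init();
    unsigned int v = 0;

    if (arg == NULL || *arg == '\0') {
        r.err_num = -1;
        r.gen_msg = "empty argument";
        return r;
    }

    for (const char *p = arg; *p; p++) {
        if (*p < '0' || *p > '9') {
            r.err_num = -1;
            r.gen_msg = "not a number";
            return r;
        }
        v = v * 10 + (unsigned int)(*p - '0');
    }

    r.data = v;
    return r;
}
```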
  21. A small repository I will be using to store my progress and test programs from the DPDK, a kernel-bypass library that is very useful for fast packet processing. The DPDK is perhaps one of the fastest libraries for network packet processing. This repository uses my DPDK Common project in an effort to make things simpler.

WARNING - I am still adding more examples as time goes on and as I need to test new functionality/methods.

Requirements

- The DPDK - Intel's Data Plane Development Kit, a kernel-bypass library that allows for fast network packet processing (one of the fastest libraries out there for packet processing).
- The DPDK Common - A project written by me aimed at making my DPDK projects simpler to set up and run.

Building The DPDK

If you want to build the DPDK using default options, the following should work, assuming you have the requirements such as ninja and meson.

# Clone the DPDK repository.
git clone https://github.com/DPDK/dpdk.git

# Change directory.
cd dpdk/

# Use meson to create the build directory.
meson build

# Change directory to build/.
cd build

# Run Ninja.
ninja

# Run Ninja install as root via sudo.
sudo ninja install

# Link libraries and such.
sudo ldconfig

All needed header files from the DPDK will be stored inside of /usr/local/include/.

You may install ninja and meson using the following.

# Update via `apt`.
sudo apt update

# Install Python PIP (version 3).
sudo apt install python3 python3-pip

# Install meson via pip3, because 'apt' usually has an outdated version of Meson.
sudo pip3 install meson

# Install Ninja.
sudo apt install ninja-build

Building The Source Files

You may use git and make to build the source files inside of this repository.

git clone --recursive https://github.com/gamemann/The-DPDK-Examples.git

cd The-DPDK-Examples/

make

Executables will be built inside of the build/ directory by default.

EAL Parameters

All DPDK applications in this repository support DPDK's EAL parameters. These may be found here.
This is useful for specifying the number of l-cores and ports to configure, for example.

Examples

Drop UDP Port 8080 (Tested And Working)

In this DPDK application, any packet arriving on UDP destination port 8080 is dropped. Otherwise, if the packet's Ethernet header type is IPv4 or VLAN, it swaps the source/destination MAC and IP addresses along with the UDP source/destination ports, then sends the packet back out the TX path (basically forwarding the packet to where it came from).

In addition to EAL parameters, the following options are available specifically for this application.

-p --portmask => The port mask to configure (e.g. 0xFFFF).
-P --portmap => The port map to configure (in '(x, y),(b,z)' format).
-q --queues => The number of RX and TX queues to set up per port (the default and recommended value is 1).
-x --promisc => Whether to enable promiscuous mode on all enabled ports.
-s --stats => If specified, prints real-time packet counter stats to stdout.

Here's an example.

./dropudp8080 -l 0-1 -n 1 -- -q 1 -p 0xff -s

Simple Layer 3 Forward (Tested And Working)

In this DPDK application, a simple routing hash table is created, with the key being the destination IP address and the value being the MAC address to forward to. Routes are read from the /etc/l3fwd/routes.txt file in the following format.

<ip address> <mac address in xx:xx:xx:xx:xx:xx>

The following is an example.

10.50.0.4 ae:21:14:4b:3a:6d
10.50.0.5 d6:45:f3:b1:a4:3d

When a packet is processed, we ensure it is an IPv4 or VLAN packet (in the VLAN case we offset the packet data by four bytes so we can process the rest of the packet without issues). Afterwards, we perform a lookup on the route hash table with the destination IP as the key.
If the lookup is successful, the source MAC address is replaced with the destination MAC address (packets go out the same port they arrived on, since we create a TX buffer and queue), and the destination MAC address is replaced with the MAC address the IP was assigned in the routes file mentioned above. Otherwise, the packet is dropped and the packets-dropped counter is incremented.

In addition to EAL parameters, the following options are available specifically for this application.

-p --portmask => The port mask to configure (e.g. 0xFFFF).
-P --portmap => The port map to configure (in '(x, y),(b,z)' format).
-q --queues => The number of RX and TX queues to set up per port (the default and recommended value is 1).
-x --promisc => Whether to enable promiscuous mode on all enabled ports.
-s --stats => If specified, prints real-time packet counter stats to stdout.

Here's an example.

./simple_l3fwd -l 0-1 -n 1 -- -q 1 -p 0xff -s

Rate Limit (Tested And Working)

In this application, if a source IP equals or exceeds the packets per second or bytes per second specified on the command line, its packets are dropped. Otherwise, the Ethernet and IP addresses are swapped along with the TCP/UDP ports, and the packet is forwarded back out the TX path. Packet stats are also included with the -s flag.

The following command line options are supported.

-p --portmask => The port mask to configure (e.g. 0xFFFF).
-P --portmap => The port map to configure (in '(x, y),(b,z)' format).
-q --queues => The number of RX and TX queues to set up per port (the default and recommended value is 1).
-x --promisc => Whether to enable promiscuous mode on all enabled ports.
-s --stats => If specified, prints real-time packet counter stats to stdout.
--pps => The packets per second to limit each source IP to.
--bps => The bytes per second to limit each source IP to.
Here's an example:

./ratelimit -l 0-1 -n 1 -- -q 1 -p 0xff -s

NOTE - This application supports LRU recycling via a custom function I made in the DPDK Common project, check_and_del_lru_from_hash_table(). Make sure to define USE_HASH_TABLES before including the DPDK Common header file when using this function.

Least Recently Used Test (Tested And Working)

This is a small application that implements a manual LRU method for hash tables. For a while I'd been trying to get LRU hash tables working through these libraries, but I had zero success actually getting such a table initialized. Therefore, I decided to keep using the regular hash table libraries and implement my own LRU functionality.

I basically use the rte_hash_get_key_with_position() function to retrieve the oldest key to delete. However, it appears each new entry is inserted at the position that was most recently deleted, so you have to keep incrementing the position value up to the table's maximum entry count. With that said, once the position value exceeds the maximum table entries, you need to set it back to 0.

No command line options are needed, but EAL parameters are still supported. Though, they won't make a difference.

Here's an example:

./ratelimit

Credits

@Christian

GitHub Repository & Source Code
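The routes.txt format used by the layer 3 forward example ("<ip address> <mac address>") can be parsed with a short sscanf() helper like the following sketch (an illustration of the format described above, not the application's actual parser):

```c
#include <assert.h>
#include <stdio.h>

/* Parse one line of /etc/l3fwd/routes.txt ("<ip> <mac>") into its
 * parts. Returns 0 on success, -1 on a malformed line. */
static int parse_route(const char *line, unsigned char ip[4], unsigned char mac[6])
{
    unsigned int a[4], m[6];

    if (sscanf(line, "%u.%u.%u.%u %x:%x:%x:%x:%x:%x",
               &a[0], &a[1], &a[2], &a[3],
               &m[0], &m[1], &m[2], &m[3], &m[4], &m[5]) != 10)
        return -1;

    for (int i = 0; i < 4; i++)
        ip[i] = (unsigned char)a[i];

    for (int i = 0; i < 6; i++)
        mac[i] = (unsigned char)m[i];

    return 0;
}
```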
  22. This is a GitHub Follow Bot made inside of a Django application. Management of the bot is done inside of Django's default admin center (/admin), and the bot itself runs in the background of the Django application. The bot works as follows.

- It runs as a background task in the Django application, and management is done in the application's web admin center.
- After installing, you must add a super user via Django (e.g. python3 manage.py createsuperuser).
- Navigate to the admin web center and add your target users (the users who will be following others) and seeders (users that start out the follow spread), then add them to the target and seed user lists.
- New/least recently updated users are parsed first, up to the max users setting value, followed by a wait time randomized within the scan time range.
- A task runs in the background to make sure parsed users are being followed by the target users.
- Another task runs in the background to retrieve each target user's followers; if the Remove Following setting is on, it automatically unfollows these specific users for the target users.
- Another task checks all users a target user is following and unfollows each user after x days (0 = doesn't unfollow).
- Each follow and unfollow is followed by a wait time randomized within a configurable range.

To Do

- Develop a more randomized timing system, including the most likely active hours of the day.
- See if I can use something better in Django to alter general settings instead of relying on a table in the SQLite database. There are also issues with synchronization due to limitations with Django at this moment.

Requirements

The following Python modules are required. I'd also recommend Python version 3.8 or above, since that's what I've tested with.

- django
- aiohttp

You can install them like below.
# Python < 3
python -m pip install django
python -m pip install aiohttp

pip install django
pip install aiohttp

# Python >= 3
python3 -m pip install django
python3 -m pip install aiohttp

pip3 install django
pip3 install aiohttp

My Motives

A few months ago, I discovered a few GitHub users following over 100K users who were obviously using bots. At first I was shocked, because I thought GitHub was against mass following, but after reading more into it, it appears they don't mind. This had me thinking: what if I started following random users as well? Some of these users had a single GitHub.io project that received a lot of attention, and I'd assume it's from all the users they were following. I decided to try this. I wanted to see if it'd help me connect with other developers, and it certainly did/has! Personally, I haven't used a bot to achieve this; I was actually going through lists of followers from other accounts and following random users. As you'd expect, this completely cluttered my home page, but it also allowed me to discover new projects, which was neat in my opinion.

While this is technically 'spam', the good thing I've noticed is that it doesn't impact the user I'm following much, other than adding a single line to their home page stating I'm following them (or sending them an email stating this, if they have that enabled). Though, I could see this becoming annoying if many people/bots started doing it (perhaps GitHub could add a user setting that caps the following count of users who can follow them, or controls notifications when a user follows them). I actually think it's neat this is allowed so far, because it lets others discover your projects. Since I have quite a few networking projects on this account, I've had some people I followed reach out stating they found my projects neat, even though they aren't into that field. That said, I wouldn't support empty profiles made just for the purpose of mass following.
USE AT YOUR OWN RISK
Even though it appears GitHub doesn't mind users mass-following others (which, again, I support), this is still considered a spam tactic, and it is still technically against the rules. Therefore, please use this tool at your own risk. I'm not even going to be using it myself, because I do enjoy manually following users. I made this project to learn more about Python.

Settings
Inside the web interface, a settings model should be visible. The following settings should be inserted.

enabled - Whether to enable the bot (should be "1" or "0").
max_scan_users - The maximum number of users to parse at once before waiting for the scan time.
wait_time_follow_min - The minimum number of seconds to wait after following or unfollowing a user.
wait_time_follow_max - The maximum number of seconds to wait after following or unfollowing a user.
wait_time_list_min - The minimum number of seconds to wait after parsing a user's followers page.
wait_time_list_max - The maximum number of seconds to wait after parsing a user's followers page.
scan_time_min - The minimum number of seconds to wait after parsing a batch of users.
scan_time_max - The maximum number of seconds to wait after parsing a batch of users.
verbose - Verbose level for stdout (see levels below).
+ Notification when a target user follows another user.
+ Notification when a target user unfollows a user due to being on the follower list or purge.
+ Notification when users are automatically created from the follow spread.
user_agent - The User Agent used to connect to the GitHub API.
seed - Whether to seed (add any existing user's followers to the user list).
seed_min_free - If above 0 and seeding is enabled, seeding will only occur when the number of new users (users who haven't been followed by any target users) is below this value.
max_api_fails - The maximum number of GitHub API failures before stopping the bot for a period of time based on the settings below (0 = disable).
lockout_wait_min - When the number of failures exceeds max_api_fails, the bot waits at least this many minutes before starting up again.
lockout_wait_max - When the number of failures exceeds max_api_fails, the bot waits at most this many minutes before starting up again.
seed_max_pages - The maximum number of pages to seed from on each user parse when looking for new users (seeding).

Installation
Installation should be performed like a regular Django application. This application uses SQLite as the database. You can read more about Django here. I would recommend the following commands.

# Make sure Django and aiohttp are installed for this user.

# Clone the repository.
git clone https://github.com/gamemann/GitHub-Follower-Bot.git

# Change directory to the Django application.
cd GitHub-Follower-Bot/src/github_follower

# Migrate the database.
python3 manage.py migrate

# Run the development server on any IP (0.0.0.0) on port 8000.
# NOTE - If you don't want to expose the application publicly, bind it to a LAN IP instead (e.g. 10.50.0.4:8000 instead of 0.0.0.0:8000).
python3 manage.py runserver 0.0.0.0:8000

# Create a super user for the admin web interface.
python3 manage.py createsuperuser

The web interface should be located at http://<host/ip>:<port>. For example: http://localhost:8000

While you could technically run the Django application's development server for this bot, since only the settings are configured through it, Django recommends reading this for production use.

FAQ
Why did you choose Django as an interface?
While settings could have been configured on the host itself, I wanted an interface that is easily accessible from anywhere. The best option for that, in my opinion, is a website. Most of my experience is with Django, which is why I chose it.

Credits
@Christian
GitHub Repository & Source Code
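The max_api_fails / lockout_wait_* settings above describe a simple back-off rule. A hypothetical sketch of that logic (not code from the repository — the function name and signature are my own) could look like this:

```python
import random


def lockout_seconds(api_fails: int, max_api_fails: int,
                    lockout_wait_min: int, lockout_wait_max: int) -> float:
    """Return how long (in seconds) the bot should pause, following the
    max_api_fails / lockout_wait_* settings: once the failure count
    reaches max_api_fails, wait a random number of minutes within the
    configured range. A max_api_fails of 0 disables the lockout."""
    if max_api_fails <= 0 or api_fails < max_api_fails:
        return 0.0
    # lockout_wait_* are configured in minutes; convert to seconds.
    return random.uniform(lockout_wait_min, lockout_wait_max) * 60.0
```

As with the follow waits, the lockout duration is randomized within a min/max range rather than fixed, presumably to avoid a predictable retry pattern against the API.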
  23. hell YAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
  24. Viva New Vegas
https://vivanewvegas.github.io/ by VishVadeva50

With a few adjustments, this pack for New Vegas was one of my favorite experiences with the game. FONV Ultimate Edition is $5 for what could be 500+ hours of content in the base game alone, let alone a game with all these additional features, bug fixes, cut dialog, and so on... You can use this as a base modding experience that sifts through a lot of the dependencies and compatibility errors you'd otherwise deal with yourself while trying to have the most holistic RPG approach. Bethesda's modding community makes all their games much more enjoyable for me, and if you haven't played the Fallout games, they're very cheap and very enjoyable.

Sidenote: I think it's sad that FO76 hurt the name of Fallout, and since Bethesda has been absorbed by Microsoft now, I think we're gonna find out soon with Starfield whether we'll be getting any good games from them anytime soon. I'm thinking it could be hit or miss, but Bethesda's new stuff doesn't have the magic it used to. I'm not sure how much I enjoy Starfield being on the Creation Engine given what kind of RPG experience they're going for, and I think if they put the next Elder Scrolls on the Creation Engine, the community will hate the game. If big studios wanna try to put games like FO76 into reality, it might just be easier for everyone to be honest and step into the future. The amount of janky optimization fixes you need to do to get FO76 to play like a game released in 2018 on a 2080 Ti is not okay lol. With the huge amount of money invested and acquisitions of studios happening left and right the last couple of years, the gamers wanna know: where the fuck are all the GOOD VIDEO GAMES?
  25. Every Friday at 9:00 PM EST, we will be having a community event night. This includes game, karaoke, and movie nights! Event Coordinators - @CallyPally @mbs Location - Our Discord server!
  26. List of Open Source Software:

Peer To Peer:

Defined Networking / Slack Nebula:
Information: Written in Golang.
Use Case: Best for server-to-server and server-to-network infrastructure.
GitHub: https://github.com/slackhq/nebula
Website: https://www.defined.net/

Tailscale:
Information: Uses WireGuard and is written in Golang.
Use Case: Best for user/server-to-server and user/server-to-network.
GitHub: https://github.com/tailscale/tailscale
Website: https://tailscale.com/

ZeroTier:
Information: Written in C/C++.
Use Case: Best for user-to-user or user-to-server.
GitHub: https://github.com/zerotier/ZeroTierOne
Website: https://www.zerotier.com/

Nebula REST API (management API for deploying Nebula):
GitHub: https://github.com/elestio/nebula-rest-api

Headscale (for Tailscale self-hosting):
GitHub: https://github.com/juanfont/headscale

VPNs:

Pritunl:
Information: OpenVPN-based and written in Python.
Use Case: Best for user-to-user or user-to-network; supports high availability.
GitHub: https://github.com/pritunl/pritunl
Website: https://pritunl.com/

SoftEther:
Use Case: Best for user-to-user or user-to-network.
GitHub: https://github.com/SoftEtherVPN/SoftEtherVPN/
Website: https://www.softether.org/

Tutorials & Information:
About Nebula: https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack/

Slack Nebula is production-ready, with support for saturating 10+ Gbps links, as tested by Slack in production.
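To give a feel for how a Nebula node is set up, here is a rough sketch of a minimal node configuration. The overlay/public IPs, certificate paths, and port are placeholders, and this is only an illustration of the general shape — check the official Nebula example config for the full option set.

```yaml
pki:
  ca: /etc/nebula/ca.crt       # CA certificate shared by the whole network
  cert: /etc/nebula/host.crt   # this node's signed certificate
  key: /etc/nebula/host.key

# Map the lighthouse's overlay IP to its publicly reachable address
# (placeholder addresses).
static_host_map:
  "192.168.100.1": ["203.0.113.10:4242"]

lighthouse:
  am_lighthouse: false   # set true on the lighthouse node itself
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

tun:
  dev: nebula1

firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: icmp
      host: any
```

The lighthouse is what lets nodes behind NAT discover each other; everything else is peer-to-peer, which is why Nebula suits the server-to-server use case noted above.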
  27. Earlier
  28. What we see will feel closer to slavery than to close-to-conscious AI for a long time, probably. Although that's not the same as saying that ML and AI aren't useful, because for research that's definitely not true. I'd much rather hope for us to focus on continuing to use programs as tools than to make them as complicated and riddled with illogical bullshit as humans are. At the same time, there are also use cases for a close-to-reality AI that go beyond being someone's robot girlfriend.