Search the Community
Showing results for tags 'guide'.
-
For the last week or so, I've been trying to set up Steam Link on my Raspberry Pi 4 Model B devices. Along the way, I've run into a few issues, some of which are documented in a separate GitHub repository I made here. Unfortunately, I found that information on the Internet about setting up Steam Link on a Raspberry Pi is very scattered and scarce, especially when you want to stream at 120 Hz/FPS, which is still fairly new to Steam Link. I also found that newer releases of Raspberry Pi OS either perform poorly while running Steam Link or don't work at all. That is why we will be using a legacy, 32-bit version of Raspberry Pi OS called Buster Lite.

My main goal is to stream games from my gaming desktop to a gaming projector I recently purchased, the BenQ TH685P, using Steam Link on my Raspberry Pi. I wanted to stream at 1920x1080 @ 120 Hz/FPS. While my goal was to stream at 120 Hz, this guide should also work for refresh rates up to 144 Hz with some small adjustments, since that's the highest refresh rate Steam Link supports at the moment. You may notice that videos and screenshots of my Steam Link stream are only around 804 pixels in height and not truly 1080P. This is specific to my setup due to my computer monitor's aspect ratio. If you use a monitor with the standard 16:9 aspect ratio (e.g. 1920x1080) and stream via Steam Link, the game will be streamed at the full 1920x1080.

Disclaimer - I apologize for any low-quality pictures and videos taken from my phone. Unfortunately, I don't have any high-quality cameras or video recording devices.

Gameplay Video
I wanted to note that the game sound cuts off at the end of this video because my headset turned off, which in turn disabled its audio devices and impacted audio from Steam Link.

Setup
I strongly recommend using a wired connection for both the computer you want to stream from and the Raspberry Pi device that will be running Steam Link. Even when your Raspberry Pi is right next to your router/wireless access point, using wireless will still likely result in hiccups every once in a while, causing noticeable performance issues.

Gaming Desktop
My gaming desktop has the following specs.
Windows 11 (22H2)
RTX 3090 Ti
AMD Ryzen 9 5900X (12c/24t)
64 GB DDR4 RAM
2 x 2 TB NVMe (Samsung 970 EVO and Samsung 980 PRO)
1 Gbps on-board NIC (wired)

Projector
I have a BenQ TH685P projector that I want to stream games to using Steam Link. It supports running at 1080P@120 Hz/FPS!

Raspberry Pi 4 Model B
I'm setting up Steam Link on a Raspberry Pi 4 Model B device with 4 cores and 4 GB of RAM.

MicroSD Card & Flasher
I am using a SanDisk 128 GB MicroSD card with a USB flasher from Anker.

Controller
I am using an Xbox Core Wireless Controller (Carbon Black) with Bluetooth.

Monitor For Testing
I use an Acer KC242Y monitor (1920x1080 @ 100 Hz) with a KVM switch (between my Raspberry Pi and one of my home servers) when setting up Raspberry Pi devices, since this allows me to use a keyboard/mouse easily. After setting up the Raspberry Pi, I connect it to my projector, since I won't need a keyboard/mouse at that point thanks to using a controller.

Flashing MicroSD Card & Installing Raspberry Pi OS
As mentioned in the overview, we will be using Raspberry Pi OS Buster Lite in this guide.
This is because, from my experiments, I've concluded that Steam Link on Bookworm has broken packages and Steam Link on Bullseye has noticeably bad performance (high display latency and frame loss).

Download & Install Raspberry Pi Imager
Firstly, you'll want to download Raspberry Pi Imager from here. This program allows you to easily flash your MicroSD card with a new Raspberry Pi OS.

Download Raspberry Pi OS Buster Lite
Next, you'll want to download the Raspberry Pi OS Buster Lite image file from here. A direct link to 2022-04-04-raspios-buster-armhf-lite.img.xz may be found here. After downloading the file, you will need to extract the image file using a program such as 7-Zip that supports decompressing .xz files.

Flash Raspberry Pi OS Buster Lite
Now, you'll want to open Raspberry Pi Imager and you should see something like below. Click the Choose OS button under the Operating System text and this will open a new scrollable menu. Scroll down to the bottom of the menu and choose Use custom. Now you'll want to select the Raspberry Pi OS Buster Lite image file you extracted earlier. Afterwards, click the Choose Storage button under the Storage text. You will now select the MicroSD card you want to flash the OS to. You should now be able to click the Write button to flash the image to the MicroSD card. This will take a minute or two depending on the speed of your MicroSD card. A popup like below will show up once the image is written to the MicroSD card. You may hit Continue and take out your MicroSD card. You'll want to insert your MicroSD card into your Raspberry Pi like below.

Connect Raspberry Pi To Monitor & Boot
Next, you'll want to connect your Raspberry Pi to your monitor or projector. You will need a keyboard and mouse connected to the Raspberry Pi for the initial setup steps. We will be using SSH as much as possible when the time comes. In this guide, I will be using my testing monitor to set up the Raspberry Pi itself, but after it is set up, I will be plugging it into my projector.

Login & Enable OpenSSH
After booting your Raspberry Pi, you will need to log in. The default username is pi and the default password is raspberry. The first thing you'll want to do after logging in is enable OpenSSH. OpenSSH will allow you to SSH to the Raspberry Pi device from your computer using an SSH client such as PuTTY or MobaXterm (what I personally use). To enable OpenSSH, first execute the sudo raspi-config command, which will open a menu showing utilities and settings for the Raspberry Pi. Next, use your arrow keys to go down to Interface Options and then hit enter to select. This will bring up a menu like the following. Now, use your arrow keys to go down to P2 SSH and then hit enter to select. This will prompt you to enable or disable SSH. Make sure to select Yes and hit enter again. Once you've enabled SSH, it will show the following. Afterwards, you can hit enter to go back to the main menu. Now you should be able to SSH to your Raspberry Pi, assuming it is on the same network as the computer you want to SSH from. While this isn't required, it will make troubleshooting issues easier, especially after you enable a systemd service that automatically restarts Steam Link each time it closes on the main TTY. You can find the IP of your Raspberry Pi using the ip a or ifconfig commands. In this case, the IP of my Raspberry Pi device is 192.168.11.103, which I have running under its own VLAN.
You can SSH to your Raspberry Pi from a terminal with the following command.
ssh pi@192.168.11.103
Obviously, you'll want to replace 192.168.11.103 with your Raspberry Pi's IP address.

Change User Password & Update Device
The first thing you'll want to do after logging in through SSH is to change the pi user's password. While it isn't required, if you expose OpenSSH on your Raspberry Pi to the Internet with raspberry as the password, you are potentially risking the security of your devices, depending on what your Raspberry Pi has access to on your network. If the Raspberry Pi only operates within your LAN and you can't be bothered to change the password, you can skip this step without much of a security risk. You can change the user password by executing the passwd command, typing in the current user's password (raspberry), and then typing in your new password twice. Next, you'll want to update/upgrade the current system using the following command.
sudo apt update && sudo apt upgrade -y

Setting Up Autologin
As of right now, when you boot your Raspberry Pi device, it will require you to log in from the main TTY connected to your monitor or projector. This would become annoying when trying to use Steam Link, so you'll want to set it to automatically log in as the pi user on the main TTY. To do this, execute the sudo raspi-config command and select System Options. Next, use your arrow keys to select the Boot / Auto Login option and hit enter. You'll want to select the second option, which is named Console Autologin Text Console, automatically logged in as 'pi' user. After hitting enter, it will bring you back to the main menu and you'll want to select Finish.

Allocating More GPU Memory
By default, only 16 MB of GPU memory is allocated to the Raspberry Pi OS. Steam Link recommends at least 128 MB, which should be more than enough. However, I allocate 256 MB just to be safe since Steam Link is the only application with a GUI I have running anyway. To allocate more memory, first execute the sudo raspi-config command to bring up the utilities menu and use your arrow keys to select Performance Options. Afterwards, hit enter. Next, use your arrow keys to select GPU Memory and hit enter. You will now want to input 128 or 256 in the box shown below. Afterwards, use your arrow keys to select Ok and hit enter to save.

Enabling 4K60
While I don't believe I needed this for my setup since I'm streaming at 1080P@120 Hz, I still recommend enabling the 4K 60 Hz option just to be safe. To enable this option, execute the sudo raspi-config command to bring up the utilities menu and use your arrow keys to select Advanced Options. Afterwards, hit enter. Next, use your arrow keys to select HDMI / Composite and hit enter. Now hit enter on Enable 4Kp60 HDMI. You should receive a message stating the feature was enabled.

Make FKMS Support Over 60 FPS
FKMS is the video driver the Raspberry Pi uses, but from what I've seen, it has issues running over 60 FPS out of the box. Therefore, you need to edit the file located at /boot/cmdline.txt and prepend vc4.fkms_max_refresh_rate=<maxFPS>. In my case, since I'm streaming at 120 Hz, I prepended vc4.fkms_max_refresh_rate=120 to the beginning of the only line. To edit the file, you can use a text editor called Nano via the sudo nano /boot/cmdline.txt command. You should see something like this. Now, prepend the value from above so that the single line looks something like below.
vc4.fkms_max_refresh_rate=120 console=serial0,115200 console=tty1 root=PARTUUID=b3485475-02 rootfstype=ext4 fsck.repair=yes rootwait
Afterwards, hit CTRL + X and then Y to save the file.

Enable Other Useful Config Options
There are a few other HDMI-specific settings I've enabled manually by editing the /boot/config.txt file directly. You can edit the Raspberry Pi config file using Nano via the sudo nano /boot/config.txt command. When editing this file, you'll want to uncomment the following lines by removing the # in front.
#hdmi_force_hotplug=1
#hdmi_drive=2
#config_hdmi_boost=4

hdmi_force_hotplug=1 - To my understanding, this forces the Raspberry Pi to treat an HDMI display as connected and keep outputting video even when one isn't detected. When this was disabled on my end, I would constantly need to restart my Raspberry Pi because the video source wouldn't be re-detected. This happened a lot because Steam Link causes the HDMI signal to drop and come back when starting the application/stream.
hdmi_drive=2 - This setting forces HDMI mode (rather than DVI), which helps with HDMI audio. I don't know if it is really required, but it doesn't hurt to have it enabled.
config_hdmi_boost=4 - This setting boosts the HDMI signal. I had an issue where streaming at 120 Hz to my projector resulted in the projector immediately losing signal to the HDMI source, and I believe this setting corrected the issue.

Your config file should look something like this now.

# For more options and information see
# http://rpf.io/configtxt
# Some settings may impact device functionality. See link above for details

# uncomment if you get no picture on HDMI for a default "safe" mode
#hdmi_safe=1

# uncomment this if your display has a black border of unused pixels visible
# and your display can output without overscan
#disable_overscan=1

# uncomment the following to adjust overscan. Use positive numbers if console
# goes off screen, and negative if there is too much border
#overscan_left=16
#overscan_right=16
#overscan_top=16
#overscan_bottom=16

# uncomment to force a console size. By default it will be display's size minus
# overscan.
#framebuffer_width=1280
#framebuffer_height=720

# uncomment if hdmi display is not detected and composite is being output
hdmi_force_hotplug=1

# uncomment to force a specific HDMI mode (this will force VGA)
#hdmi_group=1
#hdmi_mode=63

# uncomment to force a HDMI mode rather than DVI. This can make audio work in
# DMT (computer monitor) modes
hdmi_drive=2

# uncomment to increase signal to HDMI, if you have interference, blanking, or
# no display
config_hdmi_boost=4

# uncomment for composite PAL
#sdtv_mode=2

#uncomment to overclock the arm. 700 MHz is the default.
#arm_freq=800

# Uncomment some or all of these to enable the optional hardware interfaces
#dtparam=i2c_arm=on
#dtparam=i2s=on
#dtparam=spi=on

# Uncomment this to enable infrared communication.
#dtoverlay=gpio-ir,gpio_pin=17
#dtoverlay=gpio-ir-tx,gpio_pin=18

# Additional overlays and parameters are documented /boot/overlays/README

# Enable audio (loads snd_bcm2835)
dtparam=audio=on

[pi4]
# Enable DRM VC4 V3D driver on top of the dispmanx display stack
dtoverlay=vc4-fkms-v3d
max_framebuffers=2

[all]
#dtoverlay=vc4-fkms-v3d
gpu_mem=256
hdmi_enable_4kp60=1

Afterwards, hit CTRL + X and then hit Y to save the file.
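Side note - if you end up setting up more than one Raspberry Pi, you may be able to script the raspi-config steps above instead of clicking through the menus, and you can verify the results from a shell after a reboot. This is only a rough sketch on my part; the nonint sub-commands below are assumptions based on recent raspi-config builds (they aren't shown in this guide's screenshots), so double-check them against your image or just use the menus.
# Enable SSH without the menu (0 generally means enable/yes in raspi-config's non-interactive mode).
sudo raspi-config nonint do_ssh 0
# Console autologin as the pi user (B2 = console autologin).
sudo raspi-config nonint do_boot_behaviour B2
# Allocate 256 MB of GPU memory (same effect as the GPU Memory menu option).
sudo raspi-config nonint do_memory_split 256
# After a reboot, verify the settings took effect.
vcgencmd get_mem gpu
grep -E 'hdmi_force_hotplug|hdmi_drive|config_hdmi_boost|gpu_mem|hdmi_enable_4kp60' /boot/config.txt
grep -o 'vc4.fkms_max_refresh_rate=[0-9]*' /boot/cmdline.txt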
Setting Display Resolution
You will most likely need to perform this step every time you connect your Raspberry Pi to a new video output with a different resolution or refresh rate, unless the video output automatically detects the best resolution and refresh rate (it did not in my case; my projector thought 2160P@30 Hz was acceptable for gaming!). To set the proper resolution and refresh rate, execute the sudo raspi-config command and use your arrow keys to select Display Options. Afterwards, hit enter. Next, select Resolution and hit enter. You will want to find the best resolution and refresh rate for your video output in the menu. Afterwards, select Ok and hit enter.
Note - I was performing these steps on the monitor I used for testing, which only supports up to 100 Hz. If you're performing these steps while plugged into a video output that supports up to 120 - 144 Hz, you should see those options in the menus screenshotted above.

Setting Up Controllers
If you don't plan to use a controller, you may skip this step. I am using an Xbox Core Wireless Controller with Bluetooth.

Install Xpadneo
Firstly, you will want to install a Linux driver for controllers called xpadneo. You may execute the following commands to do so.
# Install Git, DKMS, and the Raspberry Pi kernel headers
sudo apt install -y git dkms raspberrypi-kernel-headers
# Clone xpadneo via Git
git clone https://github.com/atar-axis/xpadneo.git
# Change to xpadneo/ directory
cd xpadneo
# Execute install script
sudo ./install.sh
If you want to uninstall the driver, you may use the following command.
sudo ./uninstall.sh

Pairing Through Bluetooth
You'll now want to pair the controller through Bluetooth, unless you want to use a USB cable (in that case, you may skip this step). You can execute the sudo bluetoothctl command to jump into the Bluetooth CLI. Next, you'll want to execute the following commands. Please keep in mind that lines starting with # are just comments and should not be executed.
# Set default agent.
default-agent
# Start scanning for new devices
scan on
You will see a list of all nearby Bluetooth devices. You will want to put the controller into pairing mode at this time. Once you see the controller show up, you will then want to copy its MAC address, which looks like xx:xx:xx:xx:xx:xx where each x is a hexadecimal digit. You will now want to use the following command to attempt to connect to the controller.
connect <mac address>
Assuming the connection and pairing succeed (the controller light stops blinking and stays steady, along with vibrations indicating connection), you will then want to trust the device so that once it is disconnected, it will be able to reconnect automatically in the future without needing to be re-paired.
trust <mac address>
There you have it! You should be able to use your controller in the next steps.

Installing Steam Link
You can now install Steam Link by using the following command.
sudo apt install -y steamlink
You will now have to go back to the keyboard/mouse connected to your Raspberry Pi and execute the steamlink command for the first time. This will install the rest of the dependencies. You will need to hit enter a few times and type Y at some point when installing the new packages. Once the packages are installed, Steam Link should start with a welcome message!

Pairing Steam Link With Your Computer
After hitting the Get Started button from the welcome message, you will either see existing computers on your network you can stream from or no computers.
This depends on your network setup, but since my Steam Link is set up on its own VLAN, it couldn't find any existing computers. Therefore, I needed to hit the Other Computer button located at the bottom and pair my computer manually. This will show a PIN like below. You will now need to open your Steam settings on your computer. From there, click the Remote Play item in the menu on the left side. Afterwards, click the Pair Steam Link button and enter the PIN you received in the Steam Link application running on your Raspberry Pi! The Steam Link application should now start connecting and you should see something like below. You can hit the Skip button unless you want to perform a network test, which can't hurt either. You should see something like below afterwards, indicating that you're ready to stream.

Configuring Steam Link Settings
On the main Steam Link application page, you will want to click the settings icon in the top-right corner. From here, you will want to click the Streaming button. You will now be on page 1/3 of the streaming settings. The only option you need to change here is Video, from Balanced to Fast. You may not need to do this, but I found that at times my controller would have input lag on any option other than Fast, even though display latency and frame loss were low. Next, you'll want to click the More... button in the middle-bottom area. This will take you to page 2/3. On this page, you'll want to set your framerate limit manually if you're trying to stream above 60 FPS. I've also enabled Show Details on Performance Overlay so that I could see graphs/stats for performance. I'd recommend enabling that option while setting up Steam Link to make troubleshooting easier and then disabling it later on once you've confirmed things are running smoothly. I also wanted to note that if you set the Bandwidth Limit option to Unlimited, you will most likely experience high display latency and frame loss issues, especially with controllers, which I've experienced and documented here. What's weird is that I was using 90 Mbps at most and that shouldn't have been an issue on my network, since the Raspberry Pi has a wired 1 Gbps NIC and I also confirmed my Raspberry Pi and computer can communicate at ~850 Mbps using iperf3. The only explanations I can think of are that the Raspberry Pi's processing power is insufficient for handling that much streaming bandwidth, or that it is a software-related issue. Anyways, if you click the More... button again, you'll come to page 3/3. I didn't change any settings here because I didn't need to, but you can try adjusting these settings if you'd like.

Automatically Starting Steam Link On Boot
You can create a systemd service to automatically start Steam Link on boot. Make sure you've enabled Auto Login and OpenSSH as documented above before automatically starting Steam Link on boot, though. You can create a systemd service file for Steam Link using Nano via the sudo nano /etc/systemd/system/steamlink.service command. Afterwards, you can paste the following into the file.

[Unit]
Description=Steam Link

[Service]
Type=simple
User=pi
ExecStart=/usr/bin/steamlink
Restart=always

[Install]
WantedBy=multi-user.target

Please note that Restart=always will automatically restart Steam Link even when it is manually closed. I added this to the service file because I kept accidentally closing the Steam Link application with my controller and got annoyed with manually starting the application back up each time.
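If the service doesn't behave as expected once it's enabled (the enable/start commands are covered right below), standard systemd tooling can be used to inspect it over SSH; this is a generic sketch rather than anything specific to Steam Link.
# Show whether the steamlink service is running along with its most recent log lines.
sudo systemctl status steamlink
# Follow the service's output live while troubleshooting.
sudo journalctl -u steamlink -f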
I would also recommend having OpenSSH enabled if you use this service, since otherwise you will need to wait until the fail count (start limit) is reached before systemd stops automatically restarting Steam Link. This would be very annoying to deal with on the main TTY since Steam Link heavily lags user input there (even switching between different TTYs is impacted by this, from my experience). You may hit CTRL + X and then Y to save the file. To enable the service on boot, you need to execute the following command.
sudo systemctl enable steamlink
If you want to disable the service, you can use the following command instead.
sudo systemctl disable steamlink
Note - You may use sudo systemctl start steamlink to manually start the Steam Link application or sudo systemctl stop steamlink to stop the application. You may now reboot the Raspberry Pi device and see if it automatically logs into the user and starts Steam Link.

Ready To Game!
We are now ready to game! At this point, I've moved my Raspberry Pi from my testing environment/monitor to my projector. Now, at the main Steam Link menu, press or click the Start Playing button. I don't have a picture of this menu on my projector, but if you've set a PIN under your computer's Steam settings, you will need to input it like below. Steam's Big Picture has now launched on my projector. Now, let's play some Halo! We are now streaming Halo 2 at 1080P@120 Hz with around 14 ms - 18 ms display latency and <2% frame loss. This is pretty decent from the gameplay I've had! Here is the projector's display information showing the projector is running at 1080P@120 Hz.

Conclusion
Ultimately, I really hope this guide helps others out there who are going through the same struggles I went through while trying to set up Steam Link on Raspberry Pi devices. This also confirms the Raspberry Pi 4's hardware is capable of streaming above 60 FPS comfortably. While I wouldn't recommend streaming competitive games due to the additional latency added by Steam Link and your network, I still think it works great for single-player games! If you have any questions or see ways to improve this guide, please post on this thread!

Alternatives To Steam Link
If you aren't having any success with Steam Link, you could try the alternatives listed below!
Moonlight Game Streaming
I will continue adding to this list as I discover more game streaming software.

More System Information
Here are the outputs of commands showing more information on the Raspberry Pi device I have running Steam Link smoothly at 1080P@120 Hz.
# Kernel pi@raspberrypi:~ $ sudo uname -r 5.10.103-v7l+ pi@raspberrypi:~ $ sudo uname -a Linux raspberrypi 5.10.103-v7l+ #1529 SMP Tue Mar 8 12:24:00 GMT 2022 armv7l GNU/Linux # Release pi@raspberrypi:~ $ cat /etc/*-release PRETTY_NAME="Raspbian GNU/Linux 10 (buster)" NAME="Raspbian GNU/Linux" VERSION_ID="10" VERSION="10 (buster)" VERSION_CODENAME=buster ID=raspbian ID_LIKE=debian HOME_URL="http://www.raspbian.org/" SUPPORT_URL="http://www.raspbian.org/RaspbianForums" BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs" # CPU info pi@raspberrypi:~ $ cat /proc/cpuinfo processor : 0 model name : ARMv7 Processor rev 3 (v7l) BogoMIPS : 108.00 Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32 CPU implementer : 0x41 CPU architecture: 7 CPU variant : 0x0 CPU part : 0xd08 CPU revision : 3 processor : 1 model name : ARMv7 Processor rev 3 (v7l) BogoMIPS : 108.00 Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32 CPU implementer : 0x41 CPU architecture: 7 CPU variant : 0x0 CPU part : 0xd08 CPU revision : 3 processor : 2 model name : ARMv7 Processor rev 3 (v7l) BogoMIPS : 108.00 Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32 CPU implementer : 0x41 CPU architecture: 7 CPU variant : 0x0 CPU part : 0xd08 CPU revision : 3 processor : 3 model name : ARMv7 Processor rev 3 (v7l) BogoMIPS : 108.00 Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32 CPU implementer : 0x41 CPU architecture: 7 CPU variant : 0x0 CPU part : 0xd08 CPU revision : 3 Hardware : BCM2711 Revision : c03115 Serial : 10000000b02fe8cf Model : Raspberry Pi 4 Model B Rev 1.5 # Memory info pi@raspberrypi:~ $ cat /proc/meminfo MemTotal: 3748168 kB MemFree: 3258348 kB MemAvailable: 3372512 kB Buffers: 16820 kB Cached: 248116 kB SwapCached: 0 kB Active: 104912 kB Inactive: 197116 kB Active(anon): 432 kB Inactive(anon): 79044 kB Active(file): 104480 kB Inactive(file): 118072 kB Unevictable: 33736 kB Mlocked: 16 kB HighTotal: 3080192 kB HighFree: 2749820 kB LowTotal: 667976 kB LowFree: 508528 kB SwapTotal: 102396 kB SwapFree: 102396 kB Dirty: 36 kB Writeback: 0 kB AnonPages: 70828 kB Mapped: 105500 kB Shmem: 42384 kB KReclaimable: 13004 kB Slab: 28156 kB SReclaimable: 13004 kB SUnreclaim: 15152 kB KernelStack: 1160 kB PageTables: 1864 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 1976480 kB Committed_AS: 308604 kB VmallocTotal: 245760 kB VmallocUsed: 5380 kB VmallocChunk: 0 kB Percpu: 528 kB CmaTotal: 327680 kB CmaFree: 216452 kB # Partition info pi@raspberrypi:~ $ cat /proc/partitions major minor #blocks name 1 0 4096 ram0 1 1 4096 ram1 1 2 4096 ram2 1 3 4096 ram3 1 4 4096 ram4 1 5 4096 ram5 1 6 4096 ram6 1 7 4096 ram7 1 8 4096 ram8 1 9 4096 ram9 1 10 4096 ram10 1 11 4096 ram11 1 12 4096 ram12 1 13 4096 ram13 1 14 4096 ram14 1 15 4096 ram15 179 0 124835328 mmcblk0 179 1 262144 mmcblk0p1 179 2 124569088 mmcblk0p2 # Version info pi@raspberrypi:~ $ cat /proc/version Linux version 5.10.103-v7l+ (dom@buildbot) (arm-linux-gnueabihf-gcc-8 (Ubuntu/Linaro 8.4.0-3ubuntu1) 8.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #1529 SMP Tue Mar 8 12:24:00 GMT 2022 # USB info pi@raspberrypi:~ $ sudo lsusb Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. 
Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub # PCI info pi@raspberrypi:~ $ sudo lspci 00:00.0 PCI bridge: Broadcom Limited Device 2711 (rev 20) 01:00.0 USB controller: VIA Technologies, Inc. VL805 USB 3.0 Host Controller (rev 01) # Bluetooth version [bluetooth]# version Version 5.50 Guide made by @ Christian !
-
I will be showing you how to install Nginx and PHP 7.1 on an Ubuntu 16.04 server. I made this tutorial while using a stock image of Ubuntu 16.04 LTS server. Guide made on January 29, 2018.

Prerequisites
Shell access to the Ubuntu 16.04 server.
Basic knowledge of Linux.
A user with root access (either the root user itself or running commands with sudo).
A domain pointing at the Ubuntu 16.04 server (e.g. moddingcommunity.com).

Installing PHP 7.1
First, you need to install the software-properties-common package via apt:
sudo apt-get install software-properties-common
This package provides an abstraction of the apt repositories in use. It allows you to easily manage your distribution's and independent software vendors' software sources. Next, you need to add the PPA that includes the PHP 7.1 packages:
sudo add-apt-repository ppa:ondrej/php
Now you need to update apt so it picks up the PHP 7.1 packages:
sudo apt-get update
Next, install the common PHP packages along with PHP FPM via apt:
sudo apt-get install php7.1-common php7.1 php7.1-cli php7.1-fpm php7.1-json php7.1-opcache php7.1-readline -y
Keep in mind, this only includes the stock packages. You can search for additional packages by executing apt-cache search php7.1 and install them with sudo apt-get install <packageName> -y. Lastly, let's enable the php7.1-fpm service by executing the following command:
sudo systemctl enable php7.1-fpm
This will make the PHP 7.1 FPM service automatically start on boot. There you have it, PHP 7.1 should be installed! You should be able to use the same process to install newer versions of PHP. For example, instead of 7.1, you can use 7.2 and so on.

Configuring PHP 7.1 + FPM
Configuring PHP 7.1 is quite simple, but there are a few ways you can do it. I'm going to show you the way I normally configure PHP 7.1.

Make A New User
I normally run multiple websites off of one installation. Therefore, I have PHP FPM run under its own user/group for each website (this is typically more secure as well). In this case, we're going to create a group named site1 and a user named site1. You can add the group with the following command:
sudo groupadd site1
You can add the user with the following command (feel free to alter it if you know what you're doing):
sudo useradd -m -d /var/www/site1 -g site1 -s /bin/false site1
This will automatically create a directory named site1 in /var/www (Nginx's default root directory); you will need to put your site files in this folder.

Create A New PHP 7.1 FPM Pool
PHP FPM pools are configured in /etc/php/7.1/fpm/pool.d/. I normally copy www.conf for each pool/website I make and configure it to my needs. Let's start by changing our current directory to /etc/php/7.1/fpm/pool.d/ to make files easier to copy (we won't need to enter the full paths). You can do this by executing the following command:
cd /etc/php/7.1/fpm/pool.d/
You shouldn't need root access for this since the directory should be readable and executable by all users. Next, let's copy the www.conf file for our site1 website:
sudo cp www.conf site1.conf
Afterwards, edit the new site1.conf file using the text editor of your choice. I personally prefer Vim:
sudo vim site1.conf
If you're new to Vim, please read the manual to learn how to use it. You can read this tutorial here if you'd like. You will see something like this:
; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[www]

; Per pool prefix
; It only applies on the following directives:
; - 'access.log'
; - 'slowlog'
; - 'listen' (unixsocket)
; - 'chroot'
; - 'chdir'
; - 'php_values'
; - 'php_admin_values'
; When not set, the global prefix (or /usr) applies instead.
; Note: This directive can also be relative to the global prefix.
; Default Value: none
;prefix = /path/to/pools/$pool

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
; will be used.
user = www-data
group = www-data

; The address on which to accept FastCGI requests.
; Valid syntaxes are:
;   'ip.add.re.ss:port'    - to listen on a TCP socket to a specific IPv4 address on
;                            a specific port;
;   '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
;                            a specific port;
;   'port'                 - to listen on a TCP socket to all addresses
;                            (IPv6 and IPv4-mapped) on a specific port;
;   '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = /run/php/php7.1-fpm.sock

; Set listen(2) backlog.
; Default Value: 511 (-1 on FreeBSD and OpenBSD)
;listen.backlog = 511
...

First, we want to change the first uncommented line (a line without a semicolon at the beginning) from [www] to [site1].
From
[www]
To
[site1]
Second, find the lines starting with user = and group =. We want to change the values after the equals sign to the new user and group we made for the website. Change those lines to this:
user = site1
group = site1
From
user = www-data
group = www-data
To
user = site1
group = site1
Lastly, we want to find the first uncommented line that starts with listen. By default, it will listen on /run/php/php7.1-fpm.sock. However, I like to change this to something easier to remember. In this case, let's change it to /run/php/site1.sock.
listen = /run/php/site1.sock
From
listen = /run/php/php7.1-fpm.sock
To
listen = /run/php/site1.sock
You are done configuring the PHP 7.1 FPM pool! Everything else should be set up correctly. Feel free to look into configuring PHP FPM further by reading here.

Disable Harmful Functions In PHP
Securing PHP is very important and there are a few useful guides I will post at the end of the tutorial (or in the comments). The first step I usually take to secure PHP is disabling harmful functions (e.g. functions that will execute commands through the shell). You will need to open the PHP configuration file using a text editor of your choice. The PHP configuration file is named php.ini and should be located in /etc/php/7.1/fpm/. I will edit the file using Vim by executing the following command:
sudo vim /etc/php/7.1/fpm/php.ini
We will need to find the line starting with disable_functions. In Vim, you can type /disable_functions and hit enter to search for it. It should then go to the line with disable_functions=.
Personally, I like using the following:
disable_functions = apache_child_terminate,apache_setenv,define_syslog_variables,escapeshellarg,escapeshellcmd,eval,fp,fput,ftp_connect,ftp_exec,ftp_get,ftp_login,ftp_nb_fput,ftp_put,ftp_raw,ftp_rawlist,highlight_file,ini_alter,ini_get_all,ini_restore,inject_code,mysql_pconnect,openlog,passthru,php_uname,phpAds_remoteInfo,phpAds_XmlRpc,phpAds_xmlrpcDecode,phpAds_xmlrpcEncode,popen,posix_getpwuid,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,posix_setuid,posix_uname,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec,syslog,system,xmlrpc_entity_decode,parse_ini_file,show_source,exec,pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wifcontinued,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_get_handler,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,pcntl_async_signals
From
disable_functions=...
To
disable_functions = (the full list above)
Lastly, restart the PHP 7.1 FPM service for these changes to take effect:
sudo systemctl restart php7.1-fpm
There you have it! Configuring PHP 7.1 + FPM should be complete! There is additional configuration you can do to PHP, such as performance tweaking and so on. I will link useful guides at the end of the tutorial and will most likely make some of my own in the future.

Installing Nginx
Installing Nginx is simple through the APT package manager. You can execute the following command:
sudo apt-get install nginx -y
I'd recommend downloading and installing the recommended dependencies if asked. After it is installed, let's enable the service by executing the following command:
sudo systemctl enable nginx
This will make Nginx automatically start on boot. It's that simple, Nginx is now installed!

Configuring Nginx
Configuring Nginx is simple since most of the settings are tuned correctly by the installation. We will make a new configuration file for site1. Site configs are loaded from the /etc/nginx/sites-enabled/ directory. However, what I like doing is configuring the site file in the /etc/nginx/sites-available/ directory and creating a symbolic link from /etc/nginx/sites-available/<siteFile> to /etc/nginx/sites-enabled/<siteFile>.
First, let's change our current directory to /etc/nginx/sites-available/:
cd /etc/nginx/sites-available
You will not need root access for this since the directory should be readable and executable by all users. Next, let's copy the default file for our site1 website. You can do this by executing the following command:
sudo cp default site1
Some users may prefer to use site1.conf, which is fine since that makes things look cleaner. We will need to edit the file using a text editor of your choice. I will open it with Vim:
sudo vim site1
The file will look like this:

##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php7.0-cgi alone:
    #    fastcgi_pass 127.0.0.1:9000;
    #    # With php7.0-fpm:
    #    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#    listen 80;
#    listen [::]:80;
#
#    server_name example.com;
#
#    root /var/www/example.com;
#    index index.html;
#
#    location / {
#        try_files $uri $uri/ =404;
#    }
#}

The first lines we will look for are the ones starting with listen. You should see two:
listen 80 default_server;
listen [::]:80 default_server;
We want to remove default_server from the first line and comment out the second line by adding a # at the beginning, because we aren't using IPv6. It should look like this:
listen 80;
#listen [::]:80 default_server;
From
listen 80 default_server;
listen [::]:80 default_server;
To
listen 80;
#listen [::]:80 default_server;
The next line you want to look for starts with root. Change this from root /var/www/html to the directory we want to store our website files in. In this case, I want to use the home directory of the new user I created for the website, which is /var/www/site1. The line should now look like this:
root /var/www/site1;
From
root /var/www/html;
To
root /var/www/site1;
You will then want to find the line starting with index and append index.php after index.
I also remove unnecessary index entries such as index.nginx-debian.html. The line should look like this:
index index.php index.html index.htm;
From
index index.html index.htm index.nginx-debian.html;
To
index index.php index.html index.htm;
The next line should be right under the last one. It starts with server_name, and here you put your website's domains. In this case, my website's domain is moddingcommunity.com. This is how it should look:
server_name moddingcommunity.com www.moddingcommunity.com;
From
server_name _;
To
server_name moddingcommunity.com www.moddingcommunity.com;
Additionally and optionally, you can add access and error logging for this specific website by adding the following two lines:
access_log /var/log/nginx/site1_access.log;
error_log /var/log/nginx/site1_error.log;
You can change the log file paths if you'd like. You can also comment one out if you do not want to use it. I usually disable the access log on large websites, since that file can grow very large, and keep the error log. Next, we want to enable handling of PHP files on the website. If you scroll down, you'll see a bunch of commented-out lines that have to do with PHP:
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
#    include snippets/fastcgi-php.conf;
#
#    # With php7.1-cgi alone:
#    fastcgi_pass 127.0.0.1:9000;
#    # With php7.1-fpm:
#    fastcgi_pass unix:/run/php/php7.1-fpm.sock;
#}
Remove the first comment (#) of each line so it looks like this:
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
    include snippets/fastcgi-php.conf;

    # With php7.1-cgi alone:
    fastcgi_pass 127.0.0.1:9000;
    # With php7.1-fpm:
    fastcgi_pass unix:/run/php/php7.1-fpm.sock;
}
Let's comment out the following line since we're listening on a socket instead of an address/port:
#fastcgi_pass 127.0.0.1:9000;
From
fastcgi_pass 127.0.0.1:9000;
To
#fastcgi_pass 127.0.0.1:9000;
We need to change the socket path on the next fastcgi_pass line to the socket we made back when configuring PHP 7.1 FPM. The line should now look like this:
fastcgi_pass unix:/run/php/site1.sock;
From
fastcgi_pass unix:/run/php/php7.1-fpm.sock;
To
fastcgi_pass unix:/run/php/site1.sock;
Now save the file and exit. We will need to create a symbolic link from /etc/nginx/sites-available/site1 to /etc/nginx/sites-enabled/site1. We can do this by executing the following command:
sudo ln -s /etc/nginx/sites-available/site1 /etc/nginx/sites-enabled/site1
Files in the /etc/nginx/sites-enabled/ directory are included in the Nginx config. Finally, let's restart Nginx by executing the following command:
sudo systemctl restart nginx
You could also use sudo systemctl reload nginx if you don't want to restart the Nginx server. That's it! Nginx and PHP 7.1 FPM should be fully configured to work with each other on the specific website you made! Try accessing the website by going to your domain on port 80 (the default).

Useful Guides
PHP security
PHP performance tuning
PHP FPM performance tuning
Nginx performance tuning

Conclusion
You should now have an Nginx server configured to run with PHP 7.1 and FPM for site1. Additional configuration shouldn't be required, but you can always tweak PHP, FPM, and Nginx settings to make your website perform faster and better. Please ensure any files you put in the /var/www/site1/ directory are owned by the site1 user and group. With that said, most website files will only need permissions of 644, while directories should have 755.
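Before pointing a browser at your domain, it can be worth running a quick sanity check on the configuration and dropping a throwaway PHP file into the web root. This is just a sketch based on the setup above; the test.php name is arbitrary, and you should delete the file afterwards since exposing phpinfo() publicly leaks server details.
# Check the PHP 7.1 FPM and Nginx configurations for syntax errors.
sudo php-fpm7.1 -t
sudo nginx -t
# Drop a temporary PHP test file into the site's web root and hand it to the site1 user.
echo '<?php phpinfo();' | sudo tee /var/www/site1/test.php
sudo chown site1:site1 /var/www/site1/test.php
sudo chmod 644 /var/www/site1/test.php
# Once http://moddingcommunity.com/test.php renders the PHP info page, remove it.
sudo rm /var/www/site1/test.php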
I will be making more guides in the future, including how to set up SSL for your website along with FTP users using VSFTPD. This is my first guide regarding Nginx and PHP. If you see anything I can improve on, please let me know! I tried my best to make the guide clean and easy to read. This guide was written by Christian Deacon (@ Christian ) in January 2018. If you have any questions, please post a comment! Thanks for reading!
-
A repository that includes common helper functions and global variables for developing applications with the DPDK. I will be using this for my future DPDK projects. A majority of this code comes from the l2fwd example in the DPDK's source files, but I rewrote all of the code to learn more from it and tried adding as many comments as I could explaining what I understand of it. I also heavily reorganized the code and removed a lot of things I thought were unnecessary for developing my applications. I want to make clear that I am still new to the DPDK. While the helper functions and global variables in this project don't allow for in-depth configuration of a DPDK application, they are useful for general setups such as packet generator programs or fast packet processing applications where you're inspecting and manipulating packets. My main goal is to help other developers with the DPDK, along with myself. From what I've experienced, learning the DPDK can be very overwhelming due to the amount of complexity it has. I mean, have you seen their programming documentation/guides here?! I'm just hoping to help other developers learn the DPDK. As time goes on and I learn more about the DPDK, I will add onto this project!

My Other Projects Using DPDK Common
I have other projects in the pipeline that'll use DPDK Common once I implement a few other things. However, here is the current list.
Examples/Tests - A repository I'm using to store examples and tests of the DPDK while I learn it.

The Custom Return Structure
This project uses a custom return structure for functions returning values (non-void). The name of the structure is dpdkc_ret.

struct dpdkc_ret
{
    char *gen_msg;
    int err_num;
    int port_id;
    int rx_id;
    int tx_id;
    __u32 data;
    void *dataptr;
};

With that said, the function dpdkc_check_ret(struct dpdkc_ret *ret) checks for an error in the structure and exits the application with debugging information if an error is found (!= 0). Any data returned by these functions is stored in the data field (or, for pointers, in the dataptr field, which you will need to cast when using it in the application since it is of type void *).

Functions
Including the src/dpdk_common.h header in a source or another header file will additionally include general header files from the DPDK. With that said, it will allow you to use the following functions, which are a part of the DPDK Common project.

/**
 * Initializes a DPDK Common result type and returns it with default values.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret) with its default values.
**/
struct dpdkc_ret dpdkc_ret_init();

/**
 * Parses the port mask argument and stores it in the enabled_port_mask global variable.
 *
 * @param arg A (const) pointer to the optarg variable from getopt.h.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret). The port mask is stored in ret->data.
**/
struct dpdkc_ret dpdkc_parse_arg_port_mask(const char *arg);

/**
 * Parses the port pair config argument.
 *
 * @param arg A (const) pointer to the optarg variable from getopt.h.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_parse_arg_port_pair_config(const char *arg);

/**
 * Parses the queue number argument and stores it in the global variable(s).
 *
 * @param arg A (const) pointer to the optarg variable from getopt.h.
 * @param rx Whether this is a RX queue count.
 * @param tx Whether this is a TX queue count.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret). The amount of queues is stored in ret->data.
**/
struct dpdkc_ret dpdkc_parse_arg_queues(const char *arg, int rx, int tx);

/**
 * Checks the port pair config after initialization.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_check_port_pair_config(void);

/**
 * Checks and prints the status of all running ports.
 *
 * @return Void
**/
void dpdkc_check_link_status();

/**
 * Initializes the DPDK application's EAL.
 *
 * @param argc The argument count.
 * @param argv Pointer to arguments array.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_eal_init(int argc, char **argv);

/**
 * Retrieves the amount of ports available.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret). The number of available ports is stored inside of ret->data.
**/
struct dpdkc_ret dpdkc_get_nb_ports();

/**
 * Checks all port pairs.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_check_port_pairs();

/**
 * Checks all ports against the port mask.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_ports_are_valid();

/**
 * Resets all destination ports.
 *
 * @return Void
**/
void dpdkc_reset_dst_ports();

/**
 * Populates all destination ports.
 *
 * @return Void
**/
void dpdkc_populate_dst_ports();

/**
 * Maps ports and queues to each l-core.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_ports_queues_mapping();

/**
 * Creates the packet's mbuf pool.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_create_mbuf();

/**
 * Initializes all ports and RX/TX queues.
 *
 * @param promisc If 1, promiscuous mode is turned on for all ports/devices.
 * @param rx_queues The amount of RX queues per port (recommend setting to 1).
 * @param tx_queues The amount of TX queues per port (recommend setting to 1).
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_ports_queues_init(int promisc, int rx_queues, int tx_queues);

/**
 * Checks if the number of available ports is above one.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret). The amount of available ports is returned in ret->data.
**/
struct dpdkc_ret dpdkc_ports_available();

/**
 * Retrieves the amount of l-cores that are enabled and stores it in the nb_lcores variable.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret). The amount of enabled l-cores is returned in ret->data.
**/
struct dpdkc_ret dpdkc_get_available_lcore_count();

/**
 * Launches the DPDK application and waits for all l-cores to exit.
 *
 * @param f A pointer to the function to launch on all l-cores when ran.
 *
 * @return Void
**/
void dpdkc_launch_and_run(void *f);

/**
 * Stops and removes all running ports.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_port_stop_and_remove();

/**
 * Cleans up the DPDK application's EAL.
 *
 * @return The DPDK Common return structure (struct dpdkc_ret).
**/
struct dpdkc_ret dpdkc_eal_cleanup();

/**
 * Checks the error from the dpdkc_ret structure and prints the error along with exiting if one is found.
 *
 * @return Void
**/
void dpdkc_check_ret(struct dpdkc_ret *ret);

The following function(s) are available if USE_HASH_TABLES is defined.

/**
 * Removes the least recently used item from a regular hash table if the table exceeds max entries.
 *
 * @param tbl A pointer to the hash table.
 * @param max_entries The max entries in the table.
 *
 * @return 0 on success or -1 on error (failed to delete key from table).
**/
int check_and_del_lru_from_hash_table(void *tbl, __u64 max_entries);

Global Variables
Additionally, there are useful global variables directed towards aspects of the program for the DPDK. However, these are prefixed with the extern tag within the src/dpdk_common.h header file, allowing you to use them anywhere else, assuming the file is included and the object file built from make is linked.

// Variable to use for signals.
volatile __u8 quit;

// The RX and TX descriptor sizes (using defaults).
__u16 nb_rxd = RTE_RX_DESC_DEFAULT;
__u16 nb_txd = RTE_TX_DESC_DEFAULT;

// The enabled port mask.
__u32 enabled_port_mask = 0;

// Port pair params array.
struct port_pair_params port_pair_params_array[RTE_MAX_ETHPORTS / 2];

// Port pair params pointer.
struct port_pair_params *port_pair_params;

// The number of port pair parameters.
__u16 nb_port_pair_params;

// The port config.
struct port_conf ports[RTE_MAX_ETHPORTS];

// The amount of RX ports per l-core.
unsigned int rx_port_pl = 1;

// The amount of TX ports per l-core.
unsigned int tx_port_pl = 1;

// The amount of RX queues per port.
unsigned int rx_queue_pp = 1;

// The amount of TX queues per port.
unsigned int tx_queue_pp = 1;

// The queue's lcore config.
struct lcore_port_conf lcore_port_conf[RTE_MAX_LCORE];

// The buffer packet burst.
unsigned int packet_burst_size = MAX_PCKT_BURST_DEFAULT;

// The ethernet port's config to set.
struct rte_eth_conf port_conf =
{
    .rxmode =
    {
        .split_hdr_size = 1
    },

    .txmode =
    {
        .mq_mode = RTE_ETH_MQ_TX_NONE
    }
};

// A pointer to the mbuf_pool for packets.
struct rte_mempool *pcktmbuf_pool = NULL;

// The current port ID.
__u16 port_id = 0;

// Number of ports and ports available.
__u16 nb_ports = 0;
__u16 nb_ports_available = 0;

// L-core ID.
unsigned int lcore_id = 0;

// Number of l-cores.
unsigned int nb_lcores = 0;

Credits
@Christian
GitHub Repository & Source Code
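To give an idea of how these helpers and the dpdkc_ret structure fit together, here is a minimal, hypothetical usage sketch. It is not taken from the repository; it is based only on the signatures listed above, the worker function's exact signature is a guess, and the actual RX/TX burst logic is omitted.

#include "dpdk_common.h"

// Hypothetical per-lcore worker; a real application would poll its mapped RX queues here.
static void lcore_main(void *arg)
{
    while (!quit)
    {
        // RX/TX burst handling would go here.
    }
}

int main(int argc, char **argv)
{
    // Initialize the EAL and bail out with debugging info on failure.
    struct dpdkc_ret ret = dpdkc_eal_init(argc, argv);
    dpdkc_check_ret(&ret);

    // Retrieve the number of available ports (stored in ret->data per the docs above).
    ret = dpdkc_get_nb_ports();
    dpdkc_check_ret(&ret);

    // Create the mbuf pool and bring the ports/queues up (promiscuous mode on, 1 RX/TX queue each).
    ret = dpdkc_create_mbuf();
    dpdkc_check_ret(&ret);

    ret = dpdkc_ports_queues_init(1, 1, 1);
    dpdkc_check_ret(&ret);

    // Launch the worker on all l-cores and wait for them to exit.
    dpdkc_launch_and_run((void *)lcore_main);

    // Tear everything down.
    ret = dpdkc_port_stop_and_remove();
    dpdkc_check_ret(&ret);

    ret = dpdkc_eal_cleanup();
    dpdkc_check_ret(&ret);

    return 0;
}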
-
A small repository I will be using to store my progress and test programs from the DPDK, a kernel-bypass library that is very useful for fast packet processing. The DPDK is perhaps one of the fastest libraries for network packet processing. This repository uses my DPDK Common project in an effort to make things simpler.
WARNING - I am still adding more examples as time goes on and as I need to test new functionality/methods.

Requirements
The DPDK - Intel's Data Plane Development Kit, which acts as a kernel-bypass library that allows for fast network packet processing (one of the fastest libraries out there for packet processing).
The DPDK Common - A project written by me aimed at making my DPDK projects simpler to set up/run.

Building The DPDK
If you want to build the DPDK using default options, the following should work assuming you have the requirements such as ninja and meson.
# Clone the DPDK repository.
git clone https://github.com/DPDK/dpdk.git
# Change directory.
cd dpdk/
# Use meson build.
meson build
# Change directory to build/.
cd build
# Run Ninja.
ninja
# Run Ninja install as root via sudo.
sudo ninja install
# Link libraries and such.
sudo ldconfig
All needed header files from the DPDK will be stored inside of /usr/local/include/. You may install ninja and meson using the following.
# Update via `apt`.
sudo apt update
# Install Python PIP (version 3).
sudo apt install python3 python3-pip
# Install meson. Pip3 is used because 'apt' usually has an outdated version of Meson.
sudo pip3 install meson
# Install Ninja.
sudo apt install ninja-build

Building The Source Files
You may use git and make to build the source files inside of this repository.
git clone --recursive https://github.com/gamemann/The-DPDK-Examples.git
cd The-DPDK-Examples/
make
Executables will be built inside of the build/ directory by default.

EAL Parameters
All DPDK applications in this repository support DPDK's EAL parameters. These may be found here. This is useful for specifying the amount of l-cores and ports to configure, for example.

Examples
Drop UDP Port 8080 (Tested And Working)
In this DPDK application, any packets arriving on UDP destination port 8080 will be dropped. Otherwise, if the packet's ethernet header type is IPv4 or VLAN, it will swap the source/destination MAC and IP addresses along with the UDP source/destination ports, then send the packet out the TX path (basically forwarding the packet back where it came from). In addition to EAL parameters, the following options are available specifically for this application.
-p --portmask => The port mask to configure (e.g. 0xFFFF).
-P --portmap => The port map to configure (in '(x, y),(b,z)' format).
-q --queues => The amount of RX and TX queues to setup per port (default and recommended value is 1).
-x --promisc => Whether to enable promiscuous mode on all enabled ports.
-s --stats => If specified, will print real-time packet counter stats to stdout.
Here's an example.
./dropudp8080 -l 0-1 -n 1 -- -q 1 -p 0xff -s

Simple Layer 3 Forward (Tested And Working)
In this DPDK application, a simple routing hash table is created with the key being the destination IP address and the value being the MAC address to forward to. Routes are read from the /etc/l3fwd/routes.txt file in the following format.
<ip address> <mac address in xx:xx:xx:xx:xx:xx>
The following is an example.
10.50.0.4 ae:21:14:4b:3a:6d
10.50.0.5 d6:45:f3:b1:a4:3d
When a packet is processed, we ensure it is an IPv4 or VLAN packet (in the VLAN case, we offset the packet data by four bytes so we can process the rest of the packet without issues). Afterwards, we perform a lookup on the route hash table with the destination IP as the key. If the lookup is successful, the source MAC address is replaced with the destination MAC address (packets go out the same port they arrived on since we create a TX buffer and queue) and the destination MAC address is replaced with the MAC address the IP was assigned to in the routes file mentioned above. Otherwise, the packet is dropped and the packet dropped counter is incremented. In addition to EAL parameters, the following options are available specifically for this application.
-p --portmask => The port mask to configure (e.g. 0xFFFF).
-P --portmap => The port map to configure (in '(x, y),(b,z)' format).
-q --queues => The amount of RX and TX queues to setup per port (default and recommended value is 1).
-x --promisc => Whether to enable promiscuous mode on all enabled ports.
-s --stats => If specified, will print real-time packet counter stats to stdout.
Here's an example.
./simple_l3fwd -l 0-1 -n 1 -- -q 1 -p 0xff -s

Rate Limit (Tested And Working)
In this application, if a source IP equals or exceeds the packets per second or bytes per second specified on the command line, its packets are dropped. Otherwise, the ethernet and IP addresses are swapped along with the TCP/UDP ports, and the packet is forwarded back out the TX path. Packet stats are also included with the -s flag. The following command line options are supported.
-p --portmask => The port mask to configure (e.g. 0xFFFF).
-P --portmap => The port map to configure (in '(x, y),(b,z)' format).
-q --queues => The amount of RX and TX queues to setup per port (default and recommended value is 1).
-x --promisc => Whether to enable promiscuous mode on all enabled ports.
-s --stats => If specified, will print real-time packet counter stats to stdout.
--pps => The packets per second to limit each source IP to.
--bps => The bytes per second to limit each source IP to.
Here's an example:
./ratelimit -l 0-1 -n 1 -- -q 1 -p 0xff -s
NOTE - This application supports LRU recycling via a custom function I made in the DPDK Common project, check_and_del_lru_from_hash_table(). Make sure to define USE_HASH_TABLES before including the DPDK Common header file when using this function.

Least Recently Used Test (Tested And Working)
This is a small application that implements a manual LRU method for hash tables. For a while I've been trying to get LRU tables to work with the DPDK's hash table libraries. However, I had zero success in actually getting the table initialized. Therefore, I decided to keep using these libraries and implement my own LRU functionality. I basically use the rte_hash_get_key_with_position() function to retrieve the oldest key to delete. However, it appears the new entry is inserted at the position that was most recently deleted, so you have to keep incrementing the position value up to the max entries of the table. With that said, once the position value exceeds the maximum table entries, you need to set it back to 0. No command line options are needed, but EAL parameters are still supported. Though, they won't make a difference. Here's an example:
./ratelimit

Credits
@Christian
GitHub Repository & Source Code
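Putting the simple layer 3 forward example together, here is a quick sketch of creating the routes file described above and launching the forwarder. The file path, format, and run command all come from this post; the route entries are just the example values repeated.
# Create the routes file the forwarder reads on startup.
sudo mkdir -p /etc/l3fwd
printf '10.50.0.4 ae:21:14:4b:3a:6d\n10.50.0.5 d6:45:f3:b1:a4:3d\n' | sudo tee /etc/l3fwd/routes.txt
# Run the forwarder with two l-cores, one queue per port, a port mask, and stats enabled.
./simple_l3fwd -l 0-1 -n 1 -- -q 1 -p 0xff -s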
-
List of Open Source Software:

Peer To Peer:

Defined Networking / Slack Nebula:
Information: Written in Golang
Use Case: Best for server-to-server and server-to-network infrastructure
GitHub: https://github.com/slackhq/nebula
Website: https://www.defined.net/

Tailscale:
Information: Uses WireGuard and is written in Golang
Use Case: Best for user/server-to-server and user/server-to-network
GitHub: https://github.com/tailscale/tailscale
Website: https://tailscale.com/

ZeroTier:
Information: Written in C/C++
Use Case: Best for user-to-user or user-to-server
GitHub: https://github.com/zerotier/ZeroTierOne
Website: https://www.zerotier.com/

Nebula REST API: (Management API for deploying Nebula)
GitHub: https://github.com/elestio/nebula-rest-api

Headscale: (For Tailscale self-hosting)
GitHub: https://github.com/juanfont/headscale

VPNs:

Pritunl:
Information: OpenVPN-based and written in Python
Use Case: Best for user-to-user or user-to-network, and supports high availability
GitHub: https://github.com/pritunl/pritunl
Website: https://pritunl.com/

SoftEther:
Use Case: Best for user-to-user or user-to-network
GitHub: https://github.com/SoftEtherVPN/SoftEtherVPN/
Website: https://www.softether.org/

Tutorials & Information:

About Nebula: https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack/
Slack Nebula is production-ready, with Slack having tested it in production saturating 10+ Gbps links.
-
Heya everyone! I am going to be showing you how to launch a simple DoS attack against a target host along with how to block the attack on the target's side. Learning the basic concepts of a (D)DoS attack is very beneficial, especially when hosting a modded server, a community website/project, or just wanting to get involved in the cyber security field. With that said, I do NOT support using this guide and its tools maliciously or as part of a targeted attack. The following guide and tools involved were created for educational purposes.

I am also going to show you how to block/drop the attack as well. Firstly, this tutorial requires basic knowledge of Linux and networking. We are also going to be using tools created by myself called Packet Batch (for the DoS attacks) and XDP Firewall (for dropping packets as fast as possible). Packet Batch is a collection of high-performance network traffic generation tools made in C. They utilize very fast network libraries and sockets to generate the most traffic and packets possible depending on the configuration. When you use the tool from multiple sources against the same target, it is considered a DDoS (distributed) attack instead of a DoS attack.

Network Setup & Prerequisites
A Linux host you're installing Packet Batch on.
A target host. This should be a server within your LAN and, to prevent overloading your router (unless you set bandwidth limits in Packet Batch), ideally a VM on the same machine.
The local host's interface name, which can be retrieved via ip a or ifconfig.
The target host's IP and MAC address so you can bypass your router (typically not required if you want to send packets through your router).

I'm personally using a hidden VM that I actually expose in this post, but I don't care about revealing it since I only used it to browse Hack Forums lol (I made the hidden VM with the name "Ronny" a long time ago).

Installing Packet Batch
Installing Packet Batch isn't too difficult since I provide a Makefile for each version that allows you to execute sudo make && sudo make install to easily build and install the project. The issue here is that we do use third-party libraries such as libyaml, and a lot of the time those third-party libraries don't play well with certain Linux kernels/distros. I'm testing this on Ubuntu 20.04 (retrieved via cat /etc/*-release) and kernel 5.4.0-122-generic (retrieved via uname -r).

➜ ~ cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

➜ ~ uname -r
5.4.0-122-generic

What Version Of Packet Batch Should I Use?
There are three versions of Packet Batch: Standard, AF_XDP, and DPDK. In this guide, we're going to be using the Standard version because the other versions either require a more recent kernel or the DPDK, which is a kernel-bypass library that only supports certain hardware (including the virtio_net driver).

Building The Project
You can read this section on the Standard repository showing how to build and install this version. There's also a video I made a while back below.

# Clone this repository along with its submodules.
git clone --recursive https://github.com/Packet-Batch/PB-Standard.git

# Install build essentials/tools and needed libraries for LibYAML.
sudo apt install build-essential clang autoconf libtool

# Change the current working directory to PB-Standard/.
cd PB-Standard/

# Make and install (must be ran as root via sudo or the root user itself).
sudo make
sudo make install

Launching The DoS Attack
Unfortunately, the current version of Packet Batch doesn't support easily configurable parameters for limiting the amount of traffic and packets you send. This will be supported in the future, but for today we're going to use some math to determine this. With that said, we're going to be using one CPU thread for this, but if you want to send as much traffic as possible, I'd recommend multiple threads or just leaving the option out of the command line, which calculates the maximum amount of threads/cores automatically.

We will also be launching an attack using the UDP protocol on port 27015 (used for many game servers on the Source Engine). We're going to send 10,000 packets per second to the target host. Our source port will be randomly generated, but you may set it statically if you'd like. The source MAC address is automatically retrieved via system functions on Linux, but you can override this if you'd like. The MAC address format is hexadecimal, "xx:xx:xx:xx:xx:xx". There will be no additional payload as well; the UDP data's length will be 0 bytes.

Here are the command line options we're going to be using. We're also going to be using the -z flag to allow command-line functionality and override the first sequence's values.

--interface => The interface to send out of.
--time => How many seconds to run the sequence for at maximum.
--delay => The delay in-between sending packets on each thread.
--threads => The amount of threads and sockets to spawn (0 = CPU count).
--l4csum => Whether to calculate the layer-4 checksum (TCP, UDP, and ICMP) (0/1).
--dstmac => The Ethernet destination MAC address to use.
--srcip => The source IP.
--dstip => The destination IP.
--protocol => The protocol to use (TCP, UDP, or ICMP).
--l3csum => Whether to calculate the IP header checksum or not (0/1).
--udstport => The UDP destination port.

Most of the above should be self-explanatory. However, I want to note some other things.

Delay
This is the delay between sending packets in nanoseconds for each thread. Since we're using one thread, this allows us to calculate it precisely without doing additional math. A delay of 1e9 (1,000,000,000) nanoseconds equals one packet per second. Divide 1,000,000,000 by how many packets you want to send per second; for our choice of 10,000 packets per second, this results in a delay of 100,000 nanoseconds (value => 100000).

Layer 3 and 4 Checksums
These should be automatically calculated unless you know what you're doing. We set these to the value 1.

Now let's build the command to send from our local host.

sudo pcktbatch -z --interface "<iname>" --time 10 --delay 100000 --threads 1 --l3csum 1 --l4csum 1 --dstmac "<dmac>" --srcip "<sip>" --dstip "<dip>" --protocol UDP --udstport 27015

While launching the attack, on the target's server, you can run a packet capture such as the following for Linux. For Windows, you may use Wireshark.

tcpdump -i any udp and port 27015 -nne

Here is my local LAN environment's command.
sudo pcktbatch -z --interface "enp1s0" --time 10 --delay 100000 --threads 1 --l3csum 1 --l4csum 1 --dstmac "52:54:00:c2:8c:e1" --srcip "10.30.40.20" --dstip "10.1.0.58" --protocol UDP --udstport 27015

Please note you can technically use any source IP address; mine in this case is spoofed. As long as you don't have any providers or upstreams performing uRPF filtering, for example, you shouldn't have an issue with this.

Here's our packet dump via tcpdump on the target host.

I'd recommend messing around with the settings; you can technically launch many types of attacks using this tool with protocols such as UDP, TCP, and ICMP. It's really beneficial knowing how to do this from a security standpoint so you can test your network filters.

Blocking & Dropping The Attack
Now that you know how to launch a simple UDP attack, it's time to figure out how to block it. Thankfully, since this is a stateless attack, it is much easier to drop the attack than to launch it. However, when we're talking about stateful and layer-7 filters, I personally have to say making those is harder than launching complex attacks.

Attack Characteristics
There are a lot of characteristics of a network packet you may look for using tools such as tcpdump or Wireshark. However, since we've launched a simple stateless attack, it's quite easy to drop these packets. For a LAN setup this is fine, but for a production server you have to keep in mind that dropping malicious traffic from a legitimate attack will be harder, and you're limited to your NIC's capacity, which is typically 1 gbps. 1 gbps is considered very low network capacity, which is why it's recommended to use hosting providers that have the fiber and hardware capacity to support up to terabits per second of bandwidth.

Let's analyze the traffic and determine what we could drop statically.

The source IP, since it always stays the same.
The UDP length, which is 0 bytes. Depending on the application, it may not normally send empty UDP packets, so you can drop based off of this.

The first item above is the best way to drop the traffic. However, many applications also don't send empty UDP packets (a sketch of dropping on that characteristic is included at the end of this post). There are also other characteristics that may stay static as well, such as the IP header's TTL, payload length, and more. However, for now, I'm keeping it simple.

Dropping Via IPTables
IPTables is a great tool to drop traffic with on Linux. However, there are faster tools such as my XDP Firewall, which utilizes the XDP hook within the Linux kernel instead of the hook IPTables utilizes (which occurs much later and is therefore slower). The following command would drop any traffic in the INPUT chain, which is what we want to use for dropping traffic in this case. We will be dropping by source IP as seen below.

sudo iptables -A INPUT -s 10.30.40.20 -j DROP

You can confirm the rule was created with the following command.

iptables -L -n -v

You can launch the attack again and watch the pckts and bytes counters increment.

Dropping Via XDP Firewall
As stated above, XDP Firewall is a tool I made that can drop traffic a lot faster than TC Filter (Traffic Control), NFTables, and IPTables. Please read the above repository on GitHub for building and installing. Afterwards, you may use the following config to drop the attack.

/etc/xdpfw/xdpfw.conf

interface = "<iname>";
updatetime = 15;
filters = ( { enabled = true, action = 0, srcip = "10.30.40.20" } );

You may then run the tool as root via the below.
sudo xdpfw

Conclusion
In this guide, we learned how to use Packet Batch's Standard version to launch a simple UDP DoS attack at 10K packets per second and how to drop the traffic via IPTables or my XDP Firewall tool. I hope this helps anybody getting into network and cyber security!

If you have any questions or feedback, please feel free to post in this thread!

Thank you for your time!
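As a follow-up to the attack characteristics section above: if you'd rather drop on the empty-UDP-payload characteristic instead of the source IP, IPTables' length match can do this. This is only a sketch under the assumption that the attack traffic is plain IPv4 with no IP options, so an empty UDP packet is 28 bytes in total (20-byte IP header + 8-byte UDP header); adjust the port to whatever your application actually uses, and keep in mind this will also drop any legitimate tiny packets that match.

# Drop UDP packets to port 27015 that are 28 bytes or smaller (i.e. no UDP payload).
sudo iptables -A INPUT -p udp --dport 27015 -m length --length 0:28 -j DROP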
-
MTR/Trace Route Tutorial/Guide
Created On Feb. 23, 2022 By @Christian

Hey everyone, I decided to make a guide on running an MTR or trace route to troubleshoot general networking issues. This guide is mostly focused on running these tools against GFL's infrastructure (e.g. our CS:GO ZE server). However, these tips easily translate to the Internet as a whole. These tools are needed when a player is having network-related issues on our servers and website.

In this guide, we're primarily going to focus on Windows since that's what most of our users use. However, in the video and at the bottom of the written guide, I also provide some information for Linux (which can also be used by Mac users since Mac is Unix-based).

Video
Here's a video I attempted to make explaining an MTR and trace route. I made this quickly since I was in a rush, but I believe it goes over a lot of valuable information. Since some users don't want to follow along, don't like long videos, or generally understand written guides better than videos (like myself), I made a written guide below as well.

What Is A Trace Route?
A trace route is a tool/command you run against a specific host name or IP address to see what route you take to get to the destination. One thing to note is that when you connect to anything on the Internet, you're going through "hops" to get there. These hops are routers that basically forward each packet/frame to the next hop until you've reached your destination.

What Is An MTR?
MTR stands for "My Trace Route". It was formerly known as "Matt's Trace Route" a long time ago. An MTR is basically a trace route tool, but it offers the following changes compared to a trace route:

Continuously sends requests until the tool is stopped.
Shows packet loss at each hop.
Doesn't wait for three replies from each hop. Therefore, the route populates nearly immediately after executing the command.

I consider all three changes mentioned above to be pros. Therefore, I highly recommend using an MTR instead of a trace route when troubleshooting networking issues.

Running A Trace Route
Running a trace route on Windows is fairly simple. To start out, I'd recommend searching for "Command Prompt" in Windows and opening a cmd.exe window (Command Prompt). Alternatively, you may hit the Windows Key + R to display the "Run" box and enter cmd to open a Command Prompt window.

From here, you'll want to run tracert <Hostname/IP> where <Hostname/IP> is either the host name of the destination or its IP address. In this guide, we're going to be using CS:GO ZE's host name, which is goze.gflclan.com. You may also use the IP address, which is 216.52.148.47. Here's an example.

tracert goze.gflclan.com

One thing to note is that you will not want to provide a port in the IP/host name (e.g. tracert goze.gflclan.com:27015). This is because, by default, trace routes use the ICMP protocol, which doesn't include a port unlike the UDP/TCP protocols. There are trace route/MTR tools that allow you to use the TCP/UDP protocols, which do use source/destination ports (the mtr command on Linux comes with these options built in, which is discussed a bit at the bottom of this post). However, that is outside the scope of this guide.

After running it, depending on the route you take to the destination, it will take some time to complete. It takes time to complete because it waits for three ICMP replies from each hop. This isn't what an MTR does (which is one of the features listed above for it).
If a hop times out, it's going to take even longer since there's a specific timeout for it. Here are the results from my trace route. Now, the information here may be overwhelming to some users. Therefore, I will try to break things down the best I can.

Tracing route to goze.gflclan.com [216.52.148.47] over a maximum of 30 hops:

This line is basically for visibility/debugging. If a host name is specified, it'll attempt to resolve the IP address associated with the host name and output it in brackets after the host name as seen above. With that said, this also tells us the maximum hop count specified for this trace route (which is 30 by default). This means if the route needs more than 30 hops, any hops over the 30 mark will not be included in our results. Typically, routes should never exceed 30 hops, and in most cases where they do, this is usually due to routing loops, etc.

Note - Usually, the more hops you take, the further away your destination is from you. However, there are many sub-optimal routes on the Internet. Therefore, this isn't always true.

Now, let's go over the results themselves, which are the following:

  1    <1 ms    <1 ms    <1 ms  10.1.0.1
  2    22 ms    18 ms    14 ms  cpe-173-174-128-1.satx.res.rr.com [173.174.128.1]
  3    32 ms    33 ms    37 ms  tge0-0-4.lvoktxad02h.texas.rr.com [24.28.133.245]
  4    20 ms     9 ms    18 ms  agg20.lvoktxad02r.texas.rr.com [24.175.33.28]
  5    12 ms    16 ms    12 ms  agg21.snantxvy01r.texas.rr.com [24.175.32.152]
  6    35 ms    28 ms    31 ms  agg23.dllatxl301r.texas.rr.com [24.175.32.146]
  7    25 ms    27 ms    21 ms  66.109.1.216
  8    42 ms    23 ms    24 ms  66.109.5.121
  9    25 ms    27 ms    26 ms  dls-b21-link.telia.net [62.115.156.208]
 10   166 ms    39 ms    32 ms  kanc-b1-link.telia.net [213.155.130.179]
 11   146 ms    83 ms    51 ms  chi-b2-link.telia.net [213.155.130.176]
 12    39 ms   193 ms   128 ms  chi-b2-link.telia.net [62.115.122.195]
 13    43 ms    43 ms    48 ms  telia-2.e10.router2.chicago.nfoservers.com [64.74.97.253]
 14    49 ms    45 ms    46 ms  c-216-52-148-47.managed-ded.premium-chicago.nfoservers.com [216.52.148.47]

As you can see, we have five columns here. I will explain each column below.

1. The first column indicates the hop number. This is auto-incremented on each hop and starts from one. I'll be using this number to indicate hops below (e.g. hop #x).
2. The second column is the latency (in milliseconds) of the first ICMP response received back; basically the time from when you sent the packet to when you received the reply. Typically, the lower the latency, the better. If there was no response (e.g. a timeout), it will output an asterisk (*) instead of the latency.
3. Similar to the second column, this is the latency of the second ICMP response received back.
4. Similar to the second column, this is the latency of the third ICMP response received back.
5. This shows the IP address and/or the host name of the hop. I believe the trace route tool performs an rDNS lookup on the IP address to see if there's a host name associated with it on record. If there is, it will display the host name and then the IP address in brackets.

One thing to note is that when you look at the host names, they usually give an indication of where the hop is located. For example, in my route we see hop #4, which has the host name agg20.lvoktxad02r.texas.rr.com. I believe this host name indicates the hop is located in Live Oak, TX, which is the current city I'm living in (and is technically inside of San Antonio, TX). The next hop (#5) has a host name of agg21.snantxvy01r.texas.rr.com, and I believe this indicates the hop is in San Antonio, TX.
I know for sure hops #6 to #9 are all located in Dallas, TX. You can usually confirm by looking at the latency you get to the hop as well (e.g. I'm only getting 12 ms latency to the San Antonio hop, which would make sense since I'm located in San Antonio, TX). Anyways, we can see towards the end that we start routing to Telia in Chicago based off of the host name (e.g. telia-2.e10.router2.chicago.nfoservers.com). NFO has a router with Telia, which is hop #13, and hop #14 is our actual destination (the NFO machine that hosts our CS:GO ZE server).

If NFO or Internap (the data center NFO has their machines hosted in) blocks your IP, you will more than likely start seeing timeouts after hop #12, since you start getting blocked when trying to route into the NFO network. It will output an asterisk (*) in place of the latency when there's a timeout. This is why we ask users who aren't able to connect to our CS:GO ZE server for trace routes to the server. In cases like these, the block is usually due to the player's network performing port scans against NFO's network. Unfortunately, this usually indicates the player's computer or another device on the network is compromised (most likely) or the router is configured to perform port scans against networks (least likely).

I believe that's all you need to know for a trace route. Our Technical Team will be able to assist you with the results if you need any clarification or help with them.

Running An MTR
Now it's time to learn about running an MTR. As mentioned before, an MTR is similar to a trace route, but it includes some pros in my opinion. To my knowledge, Windows doesn't include an MTR tool by default. Therefore, I'd recommend using a third-party tool named WinMTR. This tool comes with a GUI, making it more user-friendly.

After installing, you may run either the 32-bit or 64-bit version (I'd suggest just using 64-bit since you're most likely running a 64-bit system nowadays). Just like a trace route, you may specify a host name or IP address under the "Host" field. Afterwards, feel free to hit "Start". Here are my results.

You'll notice that the route populates nearly instantly, unlike a trace route. This is because it's not waiting for three replies from each hop. With that said, it is also continuously sending ICMP requests until you hit "Stop".

There are a few new columns when using this tool compared to a trace route. The most important column is "Lost %", which indicates how much packet loss we're getting to that specific hop. The closer we are to 0%, the better. The percentage indicates how many of the requests we've sent didn't get a reply back.

One big thing to note about packet loss to each hop is that some hops rate-limit ICMP responses or have them turned off entirely (meaning you'll have 100% loss to that specific hop). So if you see packet loss on a hop, but all hops after it still display 0% packet loss, this is most likely the reason and nothing to worry about. If you see a hop with packet loss and every hop after it, all the way down to the destination, also shows packet loss, that's when you know you're experiencing actual packet loss, and the first hop experiencing the packet loss is probably the one dropping the packets (this isn't always true though).

In the above example, I have packet loss on some hops due to rate-limiting. However, you can see hop #14 (the destination) doesn't have any packet loss, indicating there aren't any dropped packets to the destination itself.
This was actually the result of what I did in the video above, where I sent requests every 0.2 seconds instead of every second (I didn't have any packet loss when sending a request each second, but due to rate-limiting, I did have packet loss on some hops when sending a request every 0.2 seconds).

Other than that, the rest of the new columns are related to latency to each hop. Since we are continuously sending requests and measuring the latency of each response, this allows for more columns such as "Best" (the lowest latency to the hop), "Avrg" (the average latency to the hop), "Worst" (the worst latency to the hop), and "Last" (the latency of the latest request sent out and received for the hop). The "Sent" and "Recv" columns indicate how many ICMP requests we've sent to the hop and received back. The packet loss column's ratio is (Packets Sent - Packets Received) / Packets Sent, shown as a percentage.

For outputting MTR results, you can either take a screenshot, use the "Copy Text to clipboard" button, or the "Copy HTML to clipboard" button. You may also use the export buttons to output to a file. Typically, I'd suggest just using "Copy Text to clipboard" and pasting the results.

Additionally, you may also hit the "Options" button and it'll show a box like this. The interval indicates the time in-between sending ICMP requests. The lower this number is, the more requests you'll send, obviously. However, as explained above, lower values will lead to more rate-limiting on certain hops, etc. Usually there's no reason to change this from one second.

The ping size is the ICMP packet length in bytes. This is something you won't typically need to change, but if you want to send bigger packets, you may change this to anything under your MTU limit (I don't believe this supports fragmentation, so you'll need to keep it under your MTU limit). I believe LRU (least-recently-used) indicates how many hosts it can store in one route (similar to the max hop count in trace routes). I'd suggest just leaving this at the default (128). And the "Resolve names" box indicates whether to do rDNS lookups on the IP addresses to get a host name for each hop. In WinMTR, when a host name is found via rDNS lookup and "Resolve names" is checked, it does not display the IP like a trace route does. Sometimes rDNS lookups are inaccurate. Therefore, this would be a good reason to disable resolving host names in certain cases where needed.

I believe that's about all you need to know for an MTR.

Linux/Mac
You can execute a trace route or MTR on Linux/Unix-based systems (e.g. Mac), usually by installing the correct packages. I know for Debian-based systems such as Ubuntu, you can install these using apt (e.g. apt install mtr). On most distros, these tools are included by default. However, for minimal installations, you may need to install these packages manually.

I personally like using MTR on Linux due to all the options it comes with, along with it just being a lot better than WinMTR. I'd suggest executing man mtr on your Linux OS to see what options you have. For example, take a look at this for Ubuntu (or run man mtr in your Linux terminal). You'll see many more options, such as being able to perform MTRs over the UDP/TCP protocols (e.g. mtr --udp <host> for UDP-based MTRs or mtr --tcp <host> for TCP-based MTRs). Sometimes this is useful when a hop completely disables ICMP replies and you want to see if you can get a response from the hop using the UDP/TCP protocols instead.
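To give one concrete example of a Linux run you could paste into a thread, report mode sends a fixed number of probes and then prints a summary. This is only a sketch of flags I'm assuming are present in your distro's mtr build (check man mtr if one of them isn't), using the same CS:GO ZE host name from earlier.

# Send 100 probes and print a wide, paste-friendly report (-r report mode, -w wide output, -c probe count).
mtr -rw -c 100 goze.gflclan.com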
Conclusion
I understand this is a pretty long and in-depth post, but I hope it educates some of you on how to use a trace route or MTR along with what they're good for. As always, if you need any help with running these tools or inspecting the results, you may reach out by replying to the thread or talking to me via DM!

If you see anything inaccurate in this post, please let me know! I'm always willing to learn new things, and while I'm fairly sure everything is accurate, there's a chance I missed something.

Thank you for reading!

Original Source
-
Linux Security & Hardening
Created on March 3rd, 2019 By @Christian

Hello,

In this guide, I will be going over how to secure a stock Linux machine. With that said, these are methods I use on Ubuntu/Debian-based systems. Therefore, some commands may need to be altered depending on the Linux distro you're using.

Keep Your System Up-To-Date
When logging into the server for the first time, make sure to keep the system up-to-date with kernel updates, etc. Try to keep your machines as up-to-date as possible, but before upgrading, make sure it isn't during a live period because it will more than likely require a restart. With that said, having a recovery option ready is recommended just in case the upgrade somehow locks you out of the system after a reboot. You can run the following commands on Ubuntu/Debian as root to update and upgrade the system:

apt-get update
apt-get upgrade
apt-get dist-upgrade

Disable Root Login
When you've made your own user and given it sudo access (you can add sudo users by running visudo as root in the terminal), I'd strongly suggest disabling root login in general. You can do so by editing /etc/ssh/sshd_config and setting PermitRootLogin to no. Afterwards, make sure to restart the SSH server by running the following command as root:

systemctl restart sshd

Change SSH Port
Although hackers can scan your IP for open ports, changing your SSH port usually adds an extra step for a hacker. Changing the SSH port is easy to do. Edit the /etc/ssh/sshd_config file and set Port to whatever you want the SSH port to be (e.g. 4444). Afterwards, make sure to restart the SSH server by running the following command as root:

systemctl restart sshd

Please remember that the next time you SSH into the server, you will need to specify the new port when connecting. You can do this with the -p flag followed by the port you want to connect to. For example:

ssh user@<serverIP> -p <portNum>

Use SSH Keys
SSH keys are easy to generate and generally more secure. They also make it easier to log into multiple servers if you're using one key for all of them. For example, you'd only need to know the key's passphrase to log into each server.

By default, RSA SSH keys are 2048 bits. In my opinion, 2048-bit RSA keys are not strong enough. Therefore, if you want to increase the bits, you can specify the -b flag followed by the number of bits you want the key to be. Personally, I'd suggest generating SSH keys with 4096 bits. You can generate an SSH key using RSA with the following.

ssh-keygen -t rsa -C "Just a comment."

If you want to generate an SSH key with 4096 bits, you can use the following.

ssh-keygen -t rsa -b 4096 -C "Just a more secure key."

In recent years, many have started recommending ed25519 keys, and I personally recommend this method as well. You cannot use the -b flag with this key type. However, I've heard these are typically more secure than RSA keys anyways. If you want to create an SSH key using ed25519, please use the following.

ssh-keygen -t ed25519 -C "Just a comment."

After generating the key on your local terminal (or the server you're using to connect to your remote server), you will want to copy the contents of the public key (most likely with cat ~/.ssh/id_rsa.pub, or cat ~/.ssh/id_ed25519.pub for an ed25519 key). On the remote server, you'll want to create a directory named .ssh/ in the user's home directory. Give this directory a permission of 700 (only the user has read, write, and execute). Afterwards, create a file named authorized_keys in the .ssh/ directory. Paste the contents of the public SSH key into this file.
You can add multiple SSH keys to this file (one per line). Afterwards, give the file a permission of (0)600 (only the user has read and write access). Ensure both the directory and the file are owned by the appropriate user. Here are the commands I normally use to create the .ssh/ directory and authorized_keys file and assign the appropriate permissions and ownership.

mkdir .ssh
touch .ssh/authorized_keys
chmod 700 .ssh/
chmod 600 .ssh/authorized_keys
chown -R user:usergroup .ssh/

Finally, I recommend editing the /etc/ssh/sshd_config file and enabling public key authentication there. On most systems, I believe it's enabled by default, but I like uncommenting the lines anyways. Here's an example.

PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

Afterwards, restart the SSH server by executing the following command as root.

systemctl restart sshd

Try logging into the server. If it still prompts you for a password instead of using SSH key authentication, I'd recommend adding the -v flag to the SSH command to get verbose output and see what the issue is. For example:

ssh -v user@<serverIP>

Additionally, you can use multiple private keys on one local terminal. However, you'll have to specify which key to use, either on the SSH command line or in a config file. With the SSH command, you can specify the -i flag followed by the path to the private key you want to use. For example:

ssh -i /home/christian/.ssh/secondkey user@<serverIP>

Or you can specify which key to use for certain IPs/domains in a config file. Make a config file in your local terminal's .ssh/ directory (~/.ssh/config). Afterwards, you can use the following format to specify a certain private key per host.

Host <IP/Domain>
    IdentityFile /path/to/keyfile

For example.

Host myhost.internal
    IdentityFile /home/christian/.ssh/secondkey

Disable Password Authentication
In my opinion, if you're using SSH keys, you should just disable password authentication altogether. You can do this by editing the /etc/ssh/sshd_config file and changing PasswordAuthentication to no. Please ensure your SSH key authentication works first (preferably with sudo access on at least one user) before doing this. If not, you may get locked out if your SSH key doesn't work, and you'll have to boot the machine into recovery mode to enable password authentication again or fix your SSH key issue, unless you have KVM access, of course. Afterwards, restart the SSH server by executing the following command as root.

systemctl restart sshd

If disabling password authentication entirely is not an option (e.g. you need at least one user to connect to the server via SSH with password authentication), just make sure to delete passwords for users that don't need them. I would highly suggest doing so for at least the users with sudo access. You can delete a user's password by executing the following command as root.

passwd -d <user>

Allowed Users
I would also suggest only allowing certain users to log in via SSH. You can do this by editing the /etc/ssh/sshd_config file and specifying the AllowUsers config option followed by a space-separated list of users allowed to log in (e.g. User1 User2 User3 and so on). Full example here.

AllowUsers christian user1 user3

Afterwards, restart the SSH server by executing the following command as root.

systemctl restart sshd

IPTables
The next big step in securing your machine is using IPTables. On Ubuntu, in order to have IPTables rules persist, I have to install the iptables-persistent package.
You can do so by executing the following command as root and choosing your options.

apt-get install iptables-persistent

I won't get into the specifics of IPTables (I'm going to leave that for another guide), but I would strongly recommend only white-listing the services the server will use and dropping all other traffic. After white-listing the necessary ports and IPs, set the chain's policy to DROP (which will drop all traffic by default if no rules match the packet). A minimal sketch of this approach is included at the end of this post. Afterwards, you can save IPTables by executing the following command as root.

netfilter-persistent save

After you're done configuring IPTables, I'd strongly recommend making a backup as well, which can be done with the following command as root.

iptables-save > /path/to/file

Additionally, you will most likely be white-listing the SSH port (which is 22 by default). With SSH keys, the security is already good, but if you want to take it one step further, you can white-list only certain IPs for SSH access. This obviously comes with penalties as well (e.g. you will need to be coming from a white-listed IP to access SSH). With that said, if you go with this option, ensure you have a host (e.g. a VPS or a dedicated machine) that has a static IP. In the case that your home IP address changes, you'll be able to SSH from this host and white-list your new home IP.

[O] WAFs/OpenVPN
Having your machines behind a WAF (Web Application Firewall) such as pfSense, Sophos, or IPFire will certainly help with security and give you more flexibility with firewall/NAT rules. Keep in mind this can add some network overhead to your machines, and if the firewall goes down, all the machines behind it will likely go down as well. For example, with game servers, I don't recommend a WAF because it'll add possible latency overhead (latency is very important for game servers). For websites, some latency overhead won't be a big deal and the extra security features are definitely worth it. Some WAFs also come with a built-in OpenVPN server (or another type of VPN server), which can further secure machines behind the firewall (e.g. only white-list VPN IPs for certain services on the machines behind the WAF, such as SSH).

[O] IDS/IPS
IPS stands for Intrusion Prevention System and IDS stands for Intrusion Detection System. You can read about the differences here. Most WAFs come with an IPS such as Snort. This will help prevent malicious attacks by inspecting the packets that come through the WAF. IDSs are installed onto the machines themselves and they don't necessarily prevent malicious attacks. However, they will log any suspicious activity for system admins. They monitor logs and configs.

Conclusion
These are things I keep in mind when setting up my own machines and firewalls. As I learn new things involving security, I will add them to this article. I would like to state that most breaches and security leaks are an inside job. Therefore, make sure to watch who you give access to. With that said, keep in mind that the security of the software you run is equally as important as the above. For example, if you're running a web server with PHP, ensure you harden the security there as well.

If you have any feedback or anything I can add/do better, please let me know! I'm still learning a lot, but I just wanted to share the things I already know.

Thank you for reading!

Original Source
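Referenced above in the IPTables section, here is a minimal sketch of the white-list-then-DROP approach. This is only an illustration under the assumption that SSH runs on port 22 and the server also serves HTTP/HTTPS; adjust the ports to the services your machine actually uses, and make sure your SSH rule is in place before switching the policy to DROP so you don't lock yourself out.

# Allow loopback traffic and established/related connections.
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# White-list the services this server uses (SSH, HTTP, and HTTPS in this example).
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Drop everything else by default.
sudo iptables -P INPUT DROP

# Save the rules so they persist across reboots (requires iptables-persistent).
sudo netfilter-persistent save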
-
I have created a guide on GitHub for converting player models into colorable player models: https://github.com/Kurante2801/gmod-colorable-playermodels/wiki

A colorable player model is a model with specific textures that change color depending on the player's color. For example, having a T-shirt that changes color while the hair color remains the same. This is best seen in Sandbox, where the player can change their color at will.