Good Deals

I’m an Apple guy. It started when my dad was in grad school and the lab had Macs. When I was super young I got to spend an hour or two playing with MacPaint. I thought it was great, but didn’t appreciate the significance until much later. The important part here is that my dad came to like Apple gear, Macs in particular. Quite some time later the first family computer was an old Mac Plus. After that came a Performa. When I went away to college I ordered an iBook.

That iBook. Man, I abused that computer. I learned so very, very much about owning and supporting my own machine with it. I had no support structure; the few people I was close with all had Windows machines. One friend was into Macs, but I only talked to him over AIM, which meant I had to have a working computer and network connection. That was not often the case, since I was perpetually installing, re-installing, and un-installing all kinds of things. In addition to the basics of computer geekery that I picked up, there was also the freedom. By today’s standards it was a slow machine with nearly no storage, and wireless wasn’t really a thing yet, but it was portable. It even had a handle. I lived off that machine for almost two years. It went everywhere with me: across campus to print something, home on the weekends, across the country on vacation. Once I decided on a major, I needed to upgrade. Of course I picked another Mac, moving up to a PowerMac and handing the iBook down. It was a revolution to me to be able to move my entire computing world with me from place to place. In retrospect, though, it was categorically not a good deal. After taxes it was over $1800. Never mind the specs, which are weaker than an original iPod touch’s; even for the time it was expensive. Looking back it was a terrific waste of cash. I don’t regret the purchase, because if nothing else it began teaching me how to support my equipment with only the internet.

There are a lot of things I do not miss about that computer, but from the day I moved on, I missed being able to have my Mac with me. Along the way I kept trying other things to find a portable machine that was super light, super mobile, and had what I wanted. I tried a few Dells: the weird and compromised Latitude X300, briefly an E4300, and lastly a frankensteined E4310. They were good enough computers, and for the little bit of money I paid for them, nice. But they were Dells, and only ran Windows or Linux. The X300 was an early “ultrabook”, which translated to “thin because it has no optical drive”. The X300 was released in 2003; I bought mine around 2009, and in the five years after that I wasn’t able to find a comparable model for the price that was as small and light. Latitudes as a group are great workhorse computers. They’re easy to fix, there’s tons of parts, meh. They’re boring and have shitty keyboards too.

Where is all this leading? I got a new laptop last year. And, for the first time since 2001, it’s an Apple laptop that is mine. It was a good deal too: a two-year-old 11″ MacBook Air for $350. That’s at the top end of “good deal”, edging towards “great deal”. It’s tiny, it has an SSD so it’s fast, and it works. Well, it works now. I had barely gotten it up and running: wiped the drive, registered for the Yosemite beta, gotten it installed. Then I opened it, thrilled to have MY laptop running an OS X beta, finally able to help Apple find bugs, and nothing happened. I plugged it in. No lights lit up on the MagSafe adapter. Oh. Goody.

So I did what you’re supposed to do: I made an appointment and took it to the Genius Bar. They did not have good news. They could replace the logic board for something like $500, or they could send it to the depot. The depot has a flat repair charge of $300, and they send it back working. I opted for the depot. I had spent all my “loose” money buying the damn thing; I couldn’t afford to bet that it was simply the logic board, and in no way could I spend five hundred dollars on this. A few days later they called: it was back and working. I picked it up and paid my fee. For those of you playing the home game, my cheap MacBook had now cost me $650. At the time that was $70 less than a refurbished 2014 model. And if only the story ended there.

The night after I picked it up, I sat down on the couch to finally enjoy the freedom to surf and watch TV. I opened it and the display was dark. Going through common troubleshooting steps, I found it was working: charging was fine, external video was fine. So I checked the display with a flashlight. Dead backlight. By shining a bright flashlight near the display I could see it was getting signal, but nothing was lighting it. Back to the Genius Bar. This time I learned my favorite bit of Genius jargon: “looper”. Since it was a repeat offender, all the repairs were on Apple. They replaced the entire top half of the laptop for free. I’ve got the receipt; the bottom line reads, “amount due: $0.00”. That was a great day. Too bad I was back less than a week later. By this time half the staff of the Genius Bar knew me on sight. They tried to help, telling me to ask for a replacement because I had “no confidence” in that particular machine. Thankfully that wasn’t necessary. They replaced the display assembly (lid) again AND the logic board. This adds up to nearly two computers’ worth of parts I’ve gotten for my $300 depot repair investment. Which isn’t bad. It’s still not a good deal, but I’ve got my own working laptop, legitimately running OS X.

I’ve returned to the days when I can just grab a bag and go out the door, trusting that I can solve any problem with what I’ve got on me. The bag is a lot lighter now, too.

Headless Kali

Since I am space- and RAM-limited on my laptop, I decided to make a headless Kali virtual machine to keep around for playing with. I couldn’t find a reliable tutorial for removing all the GUI stuff from a normal Kali install, so I decided to create a Debian-turned-Kali machine. Currently the goal is to use this only for command line tools.

First step: install and update a minimal Debian Wheezy (7.0) machine. Mine has only SSH installed from the start.

Next, add the Kali software repositories and update. This is where I hit my first snag, as well as my first triumph. I’m doing this to learn, after all.

  • begin by adding the following lines to the /etc/apt/sources.list file

  • deb http://http.kali.org/kali kali main non-free contrib
    deb http://security.kali.org/kali-security kali/updates main contrib non-free

  • run # apt-get update to pull the latest info. Here is where I hit my snag: I got a warning about a missing public key (NO_PUBKEY ED444FF07D8D0BF6).
  • This is a missing public key for the Kali Repos. I can still pull down and install software, but it will be doing so unauthenticated. Thanks to the public-ness of PKI, this is an easy fix, once I learned a little about what I was doing.

  • Pulling this key is simple enough: # gpg --recv-keys ED444FF07D8D0BF6
  • Simply getting the key is not enough; you must tell apt to use it: # gpg -a --export ED444FF07D8D0BF6 | apt-key add - This will return OK, and allow apt-get update to run without any further warnings.

Now I can install any Kali tool I’d like, and run it remotely through a headless VM. How can I run something headless? EASY: both of my favorite virtual machine managers, VirtualBox and VMware Fusion, provide LOTS of command line tools for interacting with their software.

  • for VMware it’s simply $ /Applications/VMware\ Fusion.app/Contents/Library/vmrun -T fusion start "/path/to/vm.vmx" nogui
  • and for VirtualBox it’s $ VBoxHeadless --startvm VMNAME

    Next time I visit this topic it’s likely to be “how to run remote GUI tools from a headless Kali VM”, when I find a need for a GUI tool on this machine.

    PS to remember Part 2

    Building Directories with PowerShell

    I needed to create a script that builds out directories following a pattern, in this case Microsoft Security Update folders. Perfect opportunity to practice a little coding. The script needs to function in an all-Windows environment and be easy to maintain. Sadly, this eliminated Python, which is more interesting to me, but PowerShell is a very close second. And despite my desire to not be a Windows guy, the almighty paycheck comes from a company that builds a product that runs exclusively on Windows. So away we go (sanitized where appropriate).

    # get current year from system and format for directories
    $year = get-date -format yyyy
    $year_short = get-date -f yy
    $month = get-date -format MMMM
    $month_short = get-date -format MM

    # constants; current month name and prefix
    $parent_dir = "$month_short - $month"
    $prefix = "\MS$year_short-"

    # actual, live working directory
    $checkifdir = "\\servername.domain\software\patches\$year\$parent_dir"
    # testing/debugging directory
    #$checkifdir = "C:\Users\wkopp\Desktop\temp\sandbox\$year\$parent_dir"

    # check if current month directory exists, if not, create
    # this happens without user input

    if (-not (Test-Path $checkifdir)){
        md $checkifdir
    }

    # ask user for range of this month's bulletins (user input)
    # create the names for the directories needed
    # create the directories

    $st = Read-Host "Please enter the number of the first bulletin"
    $end = Read-Host "Please enter the number of the last bulletin"

    $bulletin_range = $st..$end

    for ($i = 0; $i -lt $bulletin_range.length; $i++){
        $string_name = $prefix + $bulletin_range[$i].ToString("000")
        $folder = $checkifdir + $string_name

        Write-Host $folder

        if (-not (Test-Path $folder)){
            md $folder
        }
    }
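    If this ever needs to run outside Windows, the same pattern is a short Bash sketch. The base path and the bulletin numbers below are stand-ins for the real share and the Read-Host prompts, not values from the script above:

```shell
# Hypothetical Bash analog of the directory builder above
year=$(date +%Y); year_short=$(date +%y)
month=$(date +%B); month_short=$(date +%m)

base="patches/$year/$month_short - $month"
mkdir -p "$base"                 # create the month directory if missing

st=41; end=43                    # stand-ins for the user prompts
for i in $(seq "$st" "$end"); do
    # zero-pad the bulletin number to three digits, e.g. MS15-041
    mkdir -p "$(printf '%s/MS%s-%03d' "$base" "$year_short" "$i")"
done

ls "$base"
```

    The printf '%03d' does the same job as .ToString("000"), and mkdir -p replaces the Test-Path check, since it silently succeeds when the directory already exists.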

    Gear Intro

    “…and I said, that’s good! One less thing.” -Forrest Gump

    This is the start of a new category here, GEAR. Hopefully someone else finds it useful. I will surely be referencing it when necessary.

    Recently I’ve been reading about the idea of “Buy it for Life”, where you find a product, or line, or brand that solves the problems you have. I’m not so sure there is much I can buy that will satisfy my needs for life, but I do need to track the stuff I buy that’s super high quality or great for what I need. Lately I’ve found two home runs in that department. The first, a new laptop, will get its own post later, because there is quite a story to match.

    The second is something I’ve been looking for since high school, and that’s a notebook with thin, strong sheets that don’t bleed with most ink-ball or fountain pens. I received a notebook like this as a gift in high school, and I promptly filled it with notes, drawings, scribbles, and even some paintings. It was amazing. Since then I’ve been searching for its replacement, hopefully in bulk. While binge reading the wonderful Cool Tools site, I found a “what’s in your bag” article that listed Muji brand notebooks. There wasn’t any other description of the type or quality, and reading reviews didn’t shed much more light. However, my work notebook was quickly running out of pages, so I took a shot with the MUJI Blank Notebook, Book (Japanese Tankoubon) Size, Unruled, 184 sheets (fair warning, Amazon affiliate link).

    Just, WOW. Going in with no expectations, this fit all my needs. It’s small, about 5.25″ x 7.25″, has plain UNRULED sheets, plain covers, and amazing, wonderful, silky smooth paper that didn’t bleed with ANY of the pens I tried with it. Pilot V5? Nope. Uniball Signo? Nope. Fountain Pens? Nope. It might not work for many other people, but for me this is the best notebook I’ve had in a long, long time.


    I’m always looking for a better way to do things. I’ve spent hours, days, weeks, months trying to learn how to do things the most effective way possible. This often means deluding myself that there’s a way around hard work. That’s part of the impetus of this blog. It is here until I stop paying for the hosting. Staring at me. Every time I see it, I see the goal that I set for myself, *write more*. There’s not an easy way around this, there’s no shortcut key or macro, I have to do the work.

    The hardest part is starting. When I’m trying to do something I’ve never done before, I can, and have, gotten lost. I’ve never written a blog, or done much writing since the 8th grade. So I look for shortcuts. For optimizations, for fun things that help me pretend that I am moving forward. But there aren’t any. The best advice about writing is, “write more”.

    I had never started a new career, but I had had a few different jobs that could have become careers. Five years ago I rejected them and moved in a new direction. I was working in a job that I had gotten because I had “Photoshop Skills” on my resume, and my interviewers were all impressed with my communication. I told them in plain language what I had been doing, how I felt I had progressed with it, and what I could and could not do. The job combined a lot of project planning and implementation with some prepress work. That’s a fancy term for altering someone else’s artwork to get it to print the way they like or expect. For a while this was fun.

    When I found myself spending all my free time installing Ubuntu or FreeBSD, and all my reading was blogs or books about shell scripting or programming, I felt it was time to move. I looked back at what had held my attention consistently since high school. “Computers” was the simple answer, but I had a job with “computers”. On that front I couldn’t be happier. I spent every weekday in a nice office, with free coffee, working on a brand new *Mac*. High school and college me couldn’t have been happier. Future me was upset though. Future me didn’t want this. So I researched. I spent days and months searching the internet to learn where I could go with this. Network Engineer, Network Admin, HelpDesk, IT Support, SysAdmin, Operations Engineer, I applied to them all.

    There were no shortcuts here. I knew I wanted to be an IT professional. I knew it would take years to get the understanding and experience I needed for this to be a career. I had success. Not immediately, and it took hard work. I had a job that I didn’t like, and it did not prepare me, so I studied, I practiced, I worked hard to get a job as “IT Support”. Matching the adage that titles mean nothing, this job was amazing. It was hard work every day. I had to learn EVERYTHING. Active Directory, IIS, SQL Server, Apache, DNS, cable routing, hardware installation, troubleshooting, user support, business continuity, everything was new. I had arrived. I was an IT professional, a sysadmin to be specific. And it was still not enough.

    Now I was spending all my downtime learning about information security. I had learned that this was a thing during my previous job search, and had maintained a few contacts and attended a local user group. This taught me to have a new dream. Information security was sysadmin++. You have to know systems, networks, software, people, businesses, and all their interactions. This became my goal. This continues to be my goal. I’m at a different job now, with a title that has “application security” in it, but I know how little I know, and how quickly the space is changing.

    Hard work continues. I am an information security professional. I know what the security concerns of our product are. I work hard to keep learning. I work hard to get better. Sometimes I forget that effort is the most effective way to do something. Sometimes I keep hoping there’s a quick blog post I can read that will unlock the next door. There isn’t. There are distractions. There are obstacles. Sometimes they might help, but they’ll never move me forward the same way hard work will.

    Ohio InfoSec Forum 2014

    Somehow through Twitter last year I found out there’s an active infosec group in my home town, Dayton. Every year they have an anniversary con: one room, one track, and great speakers. Last year my favorite talk was about the Kali Linux project by Martin Bos. I hadn’t seen anyone discussing much other than the official website, so it was great to hear more details about the transition from BackTrack to Kali and the goals of the project.

    Following the group over the last year, I found that this year’s anniversary fell on a weekend I was planning to head to Dayton with my family anyhow. Between a flexible wife and parents and a job that’s happy to let me set my own schedule, I got a Friday off for family time and a great Saturday of learning and networking. This year’s lineup had some familiar faces and some new ones, at least to me.

    Dave Kennedy of TrustedSec opened with a great talk about awareness initiatives, things he’d seen succeed, things he had proven were failures, and ideas to move forward. As in all of his previous talks I’ve seen, Dave showed off his Java Applet attack in SET, which was honestly a distraction from the greater message: security program awareness and success is directly tied to discussions and interactions with the users, not the technical controls put in place. Dave explained how in one of his past positions he started outreach programs from the security and technical staff to the rest of the users. They explained media stories, answered questions, fixed personal laptops, basically taking any opportunity to help people understand what risks exist and how to make an intelligent decision about them.

    Jerod Brennan is a security consultant at Jacadis who assesses customers’ environments, websites, and applications for security flaws. In this role he’s been analyzing mobile applications, both iOS and Android, and has found some alarming security flaws. He opened by explaining that during penetration tests, mobile applications had not often been in scope, but as they started to grow in popularity, they’ve become a great target to help identify security problems in an organization. The problems identified in the past ranged from information disclosure to third parties having access to customer data.

    Between stories of security problems he’s seen in the wild, Jerod discussed how to retrieve the application bundle and analyze the app itself. Both iOS and Android deploy apps in a zipped container, and inside that container are text files that can be scanned for some common words or phrases to begin to understand what the app is doing. Looking for things like “http://” or “password” often yields valuable information. Other dangerous security problems he has seen in the wild were things like including .dlls in an iOS app bundle. These were easily reversed to get the raw source code, which provided valuable information. Problems like this often arise from using a cross-platform development environment, lowest-bidder contractors, or just laziness about security.
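    That kind of scan is easy to approximate with standard tools. Here’s a minimal sketch against a simulated, already-unzipped bundle; all the file names and contents below are hypothetical:

```shell
# Simulate an extracted app bundle (both .ipa and .apk are zip containers)
mkdir -p bundle_demo/assets
printf 'endpoint=http://api.example.com\n' > bundle_demo/assets/config.txt
printf 'db_password=changeme\n'            > bundle_demo/assets/settings.txt
printf 'nothing to see here\n'             > bundle_demo/readme.txt

# The scan itself: recursively list files containing telltale strings
grep -rl -e 'http://' -e 'password' bundle_demo
```

    Against a real bundle you’d first unzip the .ipa or .apk, then point the same grep at the extracted directory.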

    The most damning problem Jerod had seen in the wild was one where a client’s app had been developed by an outsourced developer. This developer had written part of the application to contact his personal environment, in addition to the client’s, whenever it connected. Jerod didn’t disclose what information was being sent or retrieved, but he emphasized the security concern at play: if a malicious entity wanted to compromise the client’s app, they no longer had to deal directly with the client’s environment. This loophole in the mobile app has the potential to let attackers compromise the developer’s environment, and pivot from there into the client’s system.

    The takeaway message was the same as in many security talks: validate your assumptions, and verify your security. Even if you have a mobile app developed wholly in-house, it must be built with security in mind. Discussing, developing, and testing security is the only way to be sure you’re defending your organization, your customers, and the related data.

    For a conference that cost only a $10 donation, breakfast and lunch were provided, which put everyone in the same room with no real goal except conversation. I met a couple of gentlemen from a local managed services company who had never attended a security group before; they were finding great value in things they could bring back to build their business. At the same time I spent a few minutes talking to the organizers, who all worked for different companies, ranging from Jacadis, to Rapid7, to an unnamed Department of Defense contractor (Dayton is in close proximity to Wright-Patterson Air Force Base, which employs a large number of civilian contractors).

    Deral Heiland is by day a penetration tester for Rapid7, and by night a guy that “googles how to code”. The combination of these two things is his application Praeda. Named for the Latin word for spoils or booty, Praeda is an application that will scan a network segment or IP for a device in its list. If it finds a matching device it will attempt to login with the vendor’s default credentials and extract/read/download any sensitive information.

    Traditionally during network penetration tests, this sort of thing had been a last-minute maneuver, just to show a few more basic vulnerabilities at the end of a test. When Deral first got his application working, he ran it and harvested enough information to get into much of the infrastructure that did not have default credentials, because sensitive information about it had been shared with less significant devices. What does this mean? That things like IP cameras, multi-function printers, and similar devices can hold and repeat a serious amount of potentially dangerous information. The result of this little demonstration was that this is now one of the first applications that Deral and his team run when they’re in a new environment.

    Around this point, someone in the audience asked how he discovered the exploits used in this tool. Deral gave half a laugh and explained that there is no real exploit here. His tool uses intended functionality: that of a restricted portal or settings page. The problem is that it ships with published defaults that were never changed. Deral’s point, and one of the most damning problems with device or software security, is that of shipping with default credentials. In this particular case, Deral’s tool has found devices using default credentials that somehow held significant information about the company that owns them.

    The talks ended with a great discussion of passwords by Tom Webster. His talk didn’t present anything particularly new, but reinforced a debate that has been occurring a lot lately: which is more secure, complex passwords or long, simple passphrases? Security as a field has generally encouraged complexity, but this fails the user: complex passwords are very difficult for a human to remember, and often technically easier for software to break.
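    The usual back-of-the-envelope entropy math makes the point. The two policies below are hypothetical examples of mine, not figures from the talk:

```shell
# Entropy of a random 8-character password over ~95 printable ASCII symbols
awk 'BEGIN { printf "8-char complex password: %.1f bits\n", 8 * log(95) / log(2) }'

# Entropy of a random 5-word passphrase from a 7776-word Diceware-style list
awk 'BEGIN { printf "5-word passphrase:       %.1f bits\n", 5 * log(7776) / log(2) }'
```

    By this estimate the random five-word phrase beats the “complex” eight-character password, roughly 64.6 bits to 52.6, and it’s far easier to remember.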

    At the end of the day (after the cake) I looked around and was shocked that the room was not full. Here was a day of great content, great discussion, and great networking for nearly free. I felt like this event was a great example of the giving nature of the InfoSec community, one that continues to surprise me every day. The organizers KNOW there are smart people around, they know these people have stories to tell that will help us all get better. So they’re doing what they can to make that accessible. Being on a Saturday means that people didn’t have to take off work. Being $10 means pretty much anyone can afford it. Being open to the public means anyone can come in. It’s pretty awesome to walk into a room of complete strangers on the other side of the state, and get welcomed like a regular. I hope I get another chance to attend or speak at an Ohio InfoSec Forum event.

    A gesture

    The reason Apple nearly ignored the Mac Mini is control. I nearly missed the revolution of gestures on the desktop because my only Mac was a Mini with a mouse. With the MacBooks, and in recent years the iMacs, the default input has changed from a mouse to a trackpad. For a lot of people, Apple invented the mouse; how could they take it away? They took it away because they found a better experience. Watching the changes in Safari showcased in Monday’s keynote, it finally clicked home in my head.

    From my point of view, gestures are the convergence point of iOS and Mac OS. Since the first release of the iPhone, bloggers and others have been wringing their hands about the iOS-ification of Mac OS. I have always thought they were missing something serious, “who will be writing and compiling Objective C on an iPad?”. I think that I, too, was missing a point. The convergence of these two things will be based on the design.

    “Most people make the mistake of thinking design is what it looks like. People think it’s this veneer — that the designers are handed this box and told, ‘Make it look good!’ That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works.” –Steve Jobs

    If design just gets out of the way, old metaphors like scroll wheels on mice just don’t cut it. Swipe, pinch, drag, grab gestures, they just make more sense. More and more of our computing experiences are moving to the browser. The easier the browser is to use, the better the experience you have with the computer, and past that, the Internet. I was shocked the first time I used two fingers to scroll on an Apple trackpad, or two fingers to right-click, or swipe to go forward or back. It just seemed right.

    I’ve always loved Apple for the experience, the little things. With my Macs, I’ve always had a gigabit Ethernet port, wifi, a DVD player, and a great trackpad. Not good, great. I cannot recall missing a tap, failing to scroll, or losing the cursor because it’s taken on a mind of its own. In other laptops, all these features seem to be optional or add-ons, if they are available at all.

    PS to remember Part 1

    Reading through this link, Raphael Mudge talks about malware using rogue applications named notepad.exe to call back out, and then drops this tidbit (emphasis mine):

    netstat -nab is a tool to help you discover rogue notepad.exe instances connecting to the internet

    I think to myself: this is a fantastic tool to use for troubleshooting; however, the default output is huge. I need to pare it down a bit. In Bash, I would just pipe to grep and be done. I’m very new at PowerShell, but it seems overly optimistic to think it has grep.

    After a bit of searching, no, there’s no grep. However, somewhere in the StackExchange network there was a more appropriate solution. “Out-String” and “Select-String”. Mixing all that together gave me the following:

    netstat -nab | Out-String -Stream | Select-String -Pattern "ESTABLISHED" -Context 1

    So what does all that mean, exactly?

    netstat: “Displays protocol statistics and current TCP/IP network connections”

      -n : shows addresses and ports as numerical information
      -a : all connections and ports
      -b : show executable involved

    Out-String: Sends objects as strings (pipes the output of netstat as strings instead of data)

      -stream : sends each string individually rather than concatenating to a single string

    Select-String: …You can use it like Grep in UNIX…

      -Pattern “” : inside the quotes goes what you’re looking to filter with
      -Context # : the number of lines of context before and after your match to return.
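    For comparison, grep can do the same trick in Bash: its -A flag returns lines after each match, much like -Context grabs surrounding lines in PowerShell. Here’s the idea against a canned, hypothetical slice of netstat-style output:

```shell
# A made-up two-connection slice of `netstat -nab`-style output
cat > conns.txt <<'EOF'
TCP    10.0.0.5:49200    93.184.216.34:443    ESTABLISHED
 [notepad.exe]
TCP    10.0.0.5:49201    10.0.0.9:445         TIME_WAIT
 [System]
EOF

# -A1 returns one line after each match, enough to catch the executable name
grep -A1 'ESTABLISHED' conns.txt
```

    grep’s -B and -C flags cover lines before and around a match, for the cases where -Context’s both-sides behavior is what you actually want.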

    So, as a baby step into PowerShell and learning how it is not Bash, this was fun. More of these to come as I get better at it.

    How I got here and where I’m going

    Last night I was catching up with an old friend, and in refreshing the last 24-36 months I told him what I had been up to. Hearing his story, it was striking how close it is to my own. He has a decent job, a wonderful wife, and, if the construction ever finishes, a lovely home. He told me he doesn’t dislike his job, but it feels like he’s not getting there quickly enough. I told him about my trials with work, and how I got to where I am now.

    After college I had no idea what I wanted to do. I suffered from nearly terminal lack of motivation. I watched my friends move out to jobs and grad school, while I just stayed put, working in a Bob Evans. Eventually it was time to move, so I got a short-term job as a liquidation manager in New Jersey of all places. This was a few months of intense work, sales at that, which gave me enough money to move to Cleveland. Once I got to Cleveland, I still had no job, and very little professional motivation to follow my college degree career path. I did, however, have the motivation of rent. I did a little construction, building decks and installing siding for a few months, odds and ends contractor stuff as a laborer. This was nice through the summer, but wouldn’t work in the winter.

    I applied and got hired at CompUSA, to work in the warehouse. This was a blessing, because if there’s anything I do not like, it is trying to sell things to people. I made a few friends in the “Tech Shop”, where customers could bring their computers for repair or upgrade. This started to teach me both how much I already knew about troubleshooting and how much fun it would be to do that as a job. I started to see how being “into computers” could result in a paycheck. After about a year there, a friend said I should send my resume to his company; he would recommend me, and they were a great place to work. I did, and was interviewed to do QA for their internal and external websites. The interview went great, but apparently shortly following it the manager who I interviewed with left that company. My application was left hanging as one of his open items, and it took me a few months of following up to get a second interview. This interview was even better than the first. I talked with a lead developer and the VP who was running the IT department temporarily. I was offered a job with no real description or title, but they said with my graphics experience I would be in between their IT department and digital print shop, not QA. I gladly accepted; this was my first full-time, for-real job, with benefits, perks, salary, everything.

    I was in that role for about 3 years. Flux in the company bounced me around to 3 or 4 managers, a few different desks, and many, many projects. I learned a great deal about digital pre-press work, and how to configure the web and print graphics for their custom print-on-demand solution. The biggest thing I learned there was that I had no desire to pursue this any further, and that it was worth a gamble to get out of the print/graphics career field. After talking it over with my wife, we agreed that now was the time to gamble. I had experience enough to get another prepress job, but no interest in it.

    Finding a low-level IT job with no experience or certifications is pretty difficult. I applied to anything IT related that said “junior” or “entry-level”, with nearly no success. One company, an information security consulting firm, replied to my application with “you’re the second or third person with graphic design experience we’ve had apply, what makes you interested in this?” So I started a dialog with this person, who I later found out was the owner/lead consultant, about how unsatisfied I was with graphics and print, and my ever increasing interest in computers, networks, software, etc. We set up an interview and I went. After a little small talk, they got down to it and explained what they were expecting from the position, then provided examples of the work environment and the tasks that would be assigned. During this I only had the faintest notion of what they were talking about, and said so. I thanked them for their time, but told them I was woefully under-equipped for the position, no matter how interested I was. They respected this and gave me a few pointers to build up the skills and knowledge to get to that level. One of these was attending the local infosec group, NEOISF.

    I’ve been attending meetings ever since. I’d like to say I’ve been every month, but life gets in the way sometimes. The first few meetings I attended I felt like the speakers were using a different language. I typically got lost in the talks right after the “Hello, my name is…”. Taking notes, reading blogs and tech articles discussed in the talks, trying out some of the things demoed, they’ve all slowly built up my knowledge and skills.

    I had one other interview that went well, and it resulted in a job offer as a “systems operator”. I optimistically thought this would be a path to a real systems administrator position. Sadly, this was not the case. The job amounted to a little bit of software and website QA, running a few reports, and monitoring the monitoring system so we could alert people if something broke. After about a week of this, I started looking for jobs again. Over the course of the next 18 months I tried to build myself up professionally. I got the A+ and Network+ to actually add IT things to my resume. Finally my constant applications paid off. I had two interviews that went great, one at a colocation facility, and another at the company where I had done the graphics work. Both companies made great offers. The colo said they support Linux and Windows customers of every stripe, and that I would get a ton of hands-on time with server administration, but it would be 3rd shift only for at least the first year. The other company offered me a spot on the IT admin team. They were expecting an acquisition to be completed soon, which would amplify the day-to-day work, and it would be an excellent time to start my IT career. Between the normal schedule offered and my experience working for the company, I took the safe bet and went back.

    The next 18 months were fantastic. I worked on a team of people who gave me difficult, challenging projects almost every day. They were great to work with, and I added a dozen lines to my resume, things like .NET website setup and migration, QA/Dev/Production environment configuration and maintenance, desktop support (Mac OS and Windows), SQL Server maintenance, version control migration, and much more. I didn’t know it at the time, but here’s where I became a sysadmin, the title I had been reaching for since I discovered it existed. Other events, unrelated to the team or the work, forced me to leave that job. It was a sad day, and I still miss working with a team where everyone is challenged together. This environment taught me how to be self-sufficient with new technologies, and just how valuable another set of eyes at the crucial moment can be.

    In my current role, I’m straddling the QA and sysadmin roles at an enterprise software company. I spend a good bit of time administering a large virtual machine farm, creating/configuring/upgrading machines, monitoring the environment, and maintaining access. Other tasks include replicating customer environments to reproduce problems for development and QA, so that we can verify the software gets fixed. My QA tasks are pretty limited compared to the rest of the QA department. My team is responsible for a very small set of features, mostly authentication and database related, because we have access to create complicated test environments at will. The big perk of this job is professional development. Previous employers of mine were either not at all interested in this, or only superficially so. Now it’s a full-time item: they supply budget and educational materials to support my goals.

    Now I’m looking at where I want to be. After working my way into the IT field and attending NEOISF meetings for roughly the same amount of time, it’s infosec, or Information Security. Bringing this up with my current manager was met with great enthusiasm, as building out an accountable security team is one of the company’s current goals. So now I have an environment to grow in, a company enthusiastically supporting my growth, and no experience. Oh, and I have the same workload as before, just with the added action item of “get better at security”. I’ve started attending conferences and asking for training, reading as much as I can get my hands on, and researching certifications that can be used as milestones to show development. Outside of work I’ve built a test lab machine to house VMs for testing “red-team” attacks and analysis. Rather than watching TV or movies, I tend to spend my free time watching talks recorded at infosec conferences. And I started this blog to add one more way of forcing myself to both do something new and keep track of it.

    A group of like-minded individuals in the QA department have started meeting to figure out what kinds of things our software has been vulnerable to in the past, and to discuss what it would take to find these sorts of problems going forward. I think our biggest problem is that no one has any real experience with security.

    Does anyone know how to build a QA security program or team?

    Walk Away

    One of the rules of troubleshooting is never change more than one thing at a time. Given that I have effectively become a professional troubleshooter as a sysadmin, you’d think I would be capable of remembering this. Turns out, not so much.

    After spending the better part of 3 months acquiring, configuring, reconfiguring, and using my test lab ESXi machine, I decided it needed one last bit of reconfiguring. Since the purpose of the lab is to have a platform for testing exploits, it’s a good idea to create a DMZ network to wall the virtual machines off from the rest of my home LAN. “This should be easy,” I told myself. Add a NIC to the router (an old Dell running PFSense) and one to the ESXi host (a less old Dell), connect the two, and tell PFSense what to do with the traffic.

    Turns out it really is just that easy. Once the link is active in PFSense, you add the interface, rename it from OPT1 to DMZ just to tidy things up, and set the IP. Next, set a couple of simple firewall rules: allow any traffic from the DMZ interface to anywhere that is NOT the LAN interface, and any traffic from the internet to the DMZ interface. Then just turn on a DHCP server, and away you go.
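    In PFSense those rules are point-and-click in the web GUI, but under the hood they compile down to pf rules. As a rough sketch (the interface names below are made up, not my actual config), the DMZ-to-LAN policy amounts to something like:

```
# pf-style sketch of the DMZ policy; em1 = LAN and em2 = DMZ are assumed names
# "quick" makes the first matching rule win, the way PFSense's GUI rules do
block quick on em2 inet from em2:network to em1:network
pass  quick on em2 inet from em2:network to any keep state
```

    Order matters here: the block rule has to come first, so DMZ hosts can reach the internet but never the LAN.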

    Away I go, almost. The link was up and physically active, blinky lights and all, but no DHCP. “How did you check this?” Good question, glad you asked. In the configuration of the ESXi host there’s a network adapters section. Looking at it, the LAN interface showed the IP range I had configured on the LAN DHCP server, so I *assumed* the same thing would happen when I connected the DMZ link. “Didn’t you try to verify another way?” Yes, and here’s where I totally dropped the ball. I rebooted the router and the ESXi host; nothing changed. I reconfigured the ESXi connection and the DMZ interface on the router; nothing changed. I added the interface to a vSwitch, connected only that vSwitch to a VM, and tried to force its NIC to update, even rebooting the VM. “Didn’t you say you were a sysadmin? You couldn’t figure out networking?” I was in a hurry, so I logged into a VM I had never used before, thinking it would be just as good as any other. I was wrong.

    In frustration, and knowing I was already confused by something simple, I stopped and came back the next night. For good measure, I rebooted both machines. I logged into a different VM, Backtrack, an OS I’m comfortable with at both the command line and the GUI. My assumption this time was, “it’s another day; before you change anything, just give it a shot”. TA-DA! Now it worked. It connected immediately, could ping the gateway (DMZ interface) IP, could ping out to the internet, you name it. Internet connection live.
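    For next time, the sanity checks I should have run from a Linux VM on that vSwitch are simple. These need a live DMZ network to mean anything, and the interface name and gateway IP are placeholders, not my actual values:

```
# watch the DHCP exchange directly instead of trusting the ESXi summary screen
sudo dhclient -r eth0 && sudo dhclient -v eth0

# confirm an address in the DMZ range actually landed on the interface
ip addr show eth0

# confirm the PFSense DMZ interface answers (placeholder gateway IP)
ping -c 3 192.168.2.1
```

    If dhclient’s DISCOVER never gets an OFFER back, the problem is on the router side; if the lease arrives but ping fails, it’s the firewall rules.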

    So I had changed the configuration and then tested it with something I didn’t fully understand. This time it didn’t really cost me anything, because getting that interface working was the goal of the night, but it did serve as a reminder not to get cocky. I’m fairly comfortable troubleshooting simple networking problems, provided I’m using tools I know. I’m also thankful it only took me 24 hours to find the solution.