Alien Barrage — Building an iOS Game with AI

Built with Swift, SpriteKit, and an AI-assisted workflow (Claude + Codex)

This week I released a new game on the App Store: Alien Barrage. It took about three months to build.

While AI handled the coding, the project still required a significant amount of work—planning, designing, testing, managing the AI workflow, and gathering feedback. I started with inspiration from classic arcade shooters, combining elements I liked from different games, adding my own ideas, and letting the gameplay evolve naturally. These are the kinds of games I grew up playing in arcades, so this project was a bit of a throwback.

Platform

I considered using Unity but ultimately chose Apple’s SpriteKit. I have a bias toward Swift, iOS, and the Xcode environment, and I also wanted to get some experience integrating in-app purchases, as well as Game Center features like leaderboards and achievements. The game is also translated into 14 languages.

Development with AI

My development process relied on a custom workflow using both Claude and Codex. I would switch between them as needed—usually when one hit context limits, or started to drift and could not “get something right”. This approach turned out to have a couple of advantages: it kept costs reasonable, helped reduce the need for long sessions using the same context, and forced more structured planning.

I used AI to generate phase-based planning documents, sort of like an outline. Each phase became a focused unit of work: the AI would implement it, stop, and tell me what to test. Once verified, I would mark the phase complete and merge its corresponding Git branch.
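To give a sense of what those outlines contained, a phase entry looked roughly like this (an illustrative sketch, not an actual excerpt from the project):

Phase 7: Power-Ups
- Goal: add shield and spread-shot power-up drops
- Scope: power-up sprite node, spawn logic, HUD timers
- Test: both power-ups drop, apply, and expire correctly on device
- Complete when: verified in testing and the phase branch is merged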

This resulted in a clean development history with meaningful commit messages and a structured progression of features. No code was generated until the whole game was planned; then the first version of a fully functional game was built one phase at a time.

AI wasn’t just used for coding. It also played a role in asset generation, image and video processing, and sound integration. I used command-line tools like ImageMagick and ffmpeg (driven by AI) for asset workflows, along with ChatGPT for generating imagery.
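In practice, a typical AI-driven asset request boiled down to one-liners like these (a sketch with made-up filenames, not the project’s actual commands):

# ImageMagick: scale a generated sprite down to the size the game expects
magick alien_raw.png -resize 128x128 alien.png

# ffmpeg: trim a screen recording to 30 seconds and drop the audio for a preview clip
ffmpeg -i gameplay.mov -t 30 -an preview.mp4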

AI Workflow (What Actually Worked)

  • Phase-Based Planning: Broke development into clear phases using AI-generated outline documents. Each phase had a defined goal, scope, and completion criteria.
  • Model Switching (Claude ↔ Codex): Alternated between models when hitting context limits. This kept costs down and reduced “AI drift” by forcing re-grounding between phases.
  • One Phase = One Git Branch: Each phase was developed in its own branch. After testing and validation, it was merged—keeping changes isolated and history clean (see the Git sketch after this list).
  • AI-Driven Task Execution with Human Checkpoints: AI would implement a phase, then stop and provide testing instructions. I validated before marking it complete.
  • Structured Commit History: AI generated detailed commit messages, resulting in a readable and useful development timeline.
  • Tight Feedback Loop: Frequent testing cycles after each phase prevented large-scale issues from accumulating.
  • Prompt Discipline: Clear, scoped prompts reduced wandering behavior and kept outputs aligned with the intended feature.
  • AI Beyond Coding: AI was also used for asset workflows (ImageMagick, ffmpeg controlled by AI), image generation (ChatGPT), video generation (Grok), and documentation generation (jazzy docs controlled by AI).
  • Role Separation: Treated the setup as pair programming, with me handling design, planning, project direction, and the AI workflow, while AI handled execution.
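On the Git side, the branch-per-phase loop was essentially the following (branch names and the commit message are illustrative, not taken from the repo):

# Start a phase on its own branch
git checkout -b phase-07-powerups

# ...AI implements the phase, I test on device...

git add -A
git commit -m "Phase 7: power-up drops, pickup logic, HUD timers"

# After validation, merge and keep the history readable
git checkout main
git merge --no-ff phase-07-powerups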

Me vs. Me + AI

Realistically, I wouldn’t have had the time to build this game on my own.

AI has made it possible to take on larger and more complex projects without getting bogged down in low-level implementation details. If you think of it as pair programming—“Arnold and the AI”—my role was design, planning, project direction, and managing the AI workflow, while AI handled execution.

That combination is effective.

There’s a lot of concern about what AI means for the future of programming, but my experience has been the opposite. Pairing real-world development experience with AI tools feels like a strong advantage.

Cross-Platform vs. Native Development

I originally leaned heavily into cross-platform development—starting with Xamarin around 2015, with years of Adobe AIR before that, then moving through React Native and .NET MAUI. The main advantage was always efficiency: one codebase, one skill set, two or more platforms.

But in the age of AI-assisted development, that tradeoff looks different.

Recently, I’ve been building native apps in Swift and Kotlin—even in areas where I wasn’t deeply experienced—and still producing complex, production-quality results with AI. Given that, native development has become far more appealing.

My AI Coding Journey

I started experimenting with AI coding tools in 2025, using Codex and later Claude.

Since then, I’ve:

  • Rebuilt my Xamarin-based iOS app TimesX in Swift
  • Added Apple Intelligence-powered content generation into TimesX for iOS devices that support it
  • Replaced the original Xamarin app on the App Store with the native Swift version
  • Built a supporting website
  • Created a native Android version (optimized for Chromebooks)
  • Developed Alien Barrage, a Swift/SpriteKit game
  • Built a website for Alien Barrage
  • Worked on several smaller projects, including Apple TV apps

In just a few months, I’ve been able to create a significant amount of code that would have taken much longer otherwise.

I’ll admit it—I’m hooked on vibe coding 😊

Advanced FFmpeg in plain English using Claude

FFmpeg is one of those tools everyone knows is powerful, but can be complicated to use. It can do almost anything with video, but the learning curve is steep, and the syntax is unforgiving. Even after years of using it, I still find myself searching for examples or reusing old commands.

Recently, I experimented with using Claude as a kind of “translator” between what I want to do in plain English and what FFmpeg actually needs. The result was surprisingly effective.

The Problem

I had a simple goal, at least conceptually:

  • Take a screen recording of my iOS app
  • Turn it into a square video for Instagram
  • Use a slow-moving 4K cloud video as a background
  • Speed up both videos
  • Center the app video with padding
  • Add a QR code in the bottom corner linking to the App Store
  • Output a single, Instagram-ready MP4

The Approach

Instead of building the FFmpeg command myself, I described the entire process in plain English to Claude and let it handle the mechanics:

  • Trim the background video to skip the black frames at the start
  • Resize it slightly larger than the app video to allow padding
  • Match its duration to the foreground video
  • Speed everything up 2×
  • Center the app video both vertically and horizontally
  • Overlay a QR code in the bottom-right corner with padding
  • Name the output file

What stood out immediately was that Claude didn’t just generate a command—it ran and verified the output. If multiple steps were needed, it handled them without me having to reason about intermediate files or filter chains.
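For the curious, the final command was along these lines (a reconstruction with made-up filenames and numbers, not the exact command Claude produced):

# 0: cloud background (skip the black intro), 1: app recording, 2: QR code
ffmpeg -ss 2 -i clouds_4k.mp4 -i app.mov -i qr.png -filter_complex \
"[0:v]crop=ih:ih,scale=1080:1080,setpts=PTS/2[bg];"\
"[1:v]scale=-2:980,setpts=PTS/2[fg];"\
"[bg][fg]overlay=(W-w)/2:(H-h)/2:shortest=1[sq];"\
"[sq][2:v]overlay=W-w-40:H-h-40[out]" \
-map "[out]" instagram.mp4

# crop/scale squares the background, setpts=PTS/2 gives the 2x speed-up,
# the first overlay centers the app video, the second pads the QR code
# 40px from the bottom-right corner, and shortest=1 matches the durations.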

The Result

Less than a minute later, I had exactly what I wanted:

  • A square video
  • Animated cloud background
  • App video perfectly centered
  • QR code placed cleanly with spacing
  • Ready to upload to Instagram

I previewed it in VLC, and everything matched the mental image I had when I wrote the prompt.

Why This Matters

I’ve tried doing this same task in traditional video editors like iMovie, and ironically, it was harder. Tools with visual timelines can struggle once you step outside their expected workflows.

What made this interesting wasn’t just that AI “saved time.” It removed friction from a task that usually discourages experimentation. I didn’t have to remember FFmpeg syntax or worry about getting one parameter wrong—I could focus entirely on the outcome.

This also wasn’t really “programming” in the traditional sense. It was intent-driven tooling: describing a result and letting the system figure out the steps.

Takeaway

If you already know what FFmpeg can do but avoid it because of complexity, pairing it with an AI assistant like Claude is a game changer. It lowers the barrier without limiting capability—and it encourages you to try things you might otherwise skip.

Hopefully this opens up a few ideas for how you might use AI tools in your own workflows, even outside of coding.

TimesX 2026, now with AI

What’s New?

A decade after TimesX was first released, the 2026 version receives a full rewrite in native Swift, along with a major new feature: AI-generated word questions.

There is a clear industry trend toward empowering handheld devices with artificial intelligence, visible across personal computers, phones, and wearables such as Meta glasses. Apple’s chips have included NPUs (Neural Processing Units) for several generations, and with the M1 (Macs and iPads) and later the A17 Pro (iPhone 15 Pro), they became capable enough for Apple Intelligence. Beyond existing AI capabilities on iOS devices—such as face detection and image classification—this introduced support for an on-device Large Language Model (LLM), similar in concept to ChatGPT.

How Does This Affect TimesX?

Since its creation, the app had supported only two question types: Multiple Choice and Type the Answer. With the 2026 rewrite, a third question type—Word Questions—has been added.

This rewrite made it easier to access Apple’s on-device LLM directly in code. On supported hardware, TimesX can now generate fresh word questions for every quiz using Apple Intelligence. An important benefit for security-conscious parents is that the AI runs entirely on-device and does not require an internet connection. Once installed, TimesX can operate completely offline.

What About Devices Without Apple Intelligence?

For devices that do not support Apple Intelligence, TimesX includes a pre-generated bank of word questions. The AI feature can also be disabled in Settings, in which case the app will always use the question bank instead.

What Else Is New?

Dozens of refinements have been made across layout, imagery, and usability. Some of the most impactful improvements are on the Error Counts screen.

Imagine a child using TimesX to practice multiplication tables across dozens of short tests each day. The app tracks questions that have been answered incorrectly at least twice and surfaces them on this screen. The update adds visibility into how many times each question has also been answered correctly.

When a child starts a Test from the Error Counts screen, the quiz is built entirely from these problem areas. Over time, as accuracy improves, a happy face appears next to questions that have been answered correctly more often than incorrectly—clear feedback that focused practice is paying off.

Conclusion

If you—or someone you know—has a child in elementary school where multiplication tables are part of the curriculum, TimesX offers a more focused and adaptive practice experience than traditional methods or most existing apps.

More detail is available on the website.

Generate Subtitles for Your Videos Free with AI

The audio in this video contains several languages, and the subtitles were generated using the process described in this post.

I recently watched a movie on Netflix with scenes in multiple languages: English, Korean, French, and Italian. During the foreign language scenes, there was no translation, just the name of the language spoken, like “[Korean]”. How disappointing…

In a nerdy fit of revenge I decided to fix this myself. So, I obtained an .mp4 video file of the movie and went to work. The tech I’m about to describe uses AI to listen to your movie’s audio, translate it from almost any language, and create subtitles. You could also use these tools for other tasks such as generating lyrics for music.

The tools involved are a combination of ffmpeg and mlx-whisper – a version of OpenAI’s Whisper model optimized to take advantage of Apple Silicon chips. The hour-and-a-half movie I mentioned took less than 5 minutes to subtitle on my Apple M2 Max MacBook Pro with 32 GB of memory. I asked ChatGPT what makes mlx-whisper faster on Apple Silicon: in short, the MLX framework runs the model on the GPU via Metal and takes advantage of the chips’ unified memory, so weights don’t have to be copied back and forth between CPU and GPU.

What you’ll need

  • A modern Mac using Apple Silicon
  • The Terminal app

This is how you get ffmpeg and mlx-whisper on your Mac.

  1. Brew
    • https://brew.sh/
    • On the web page, you can copy the install command for your terminal
      • /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    • Follow the resulting instructions displayed in the terminal to make brew into a command. These were mine, specific to my user name on the machine; copy yours from the terminal:
      • echo >> /Users/bubba/.bash_profile
      • echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> /Users/bubba/.bash_profile
      • eval "$(/opt/homebrew/bin/brew shellenv)"
    • Give it a quick test: type brew and hit return to see if it works
  2. FFmpeg
    • brew install ffmpeg
    • Give it a quick test: type ffmpeg and hit return
  3. Python
    • macOS should already include python3 (it ships with the Xcode Command Line Tools that Homebrew installs); give it a quick test: type python3 and hit return
  4. Pip
    • Download the script from https://bootstrap.pypa.io/get-pip.py into a folder you can run the terminal from. You can also right-click the link and save it
    • python3 get-pip.py
    • Give it a quick test: type pip and hit return
  5. MLX-Whisper
    • pip install mlx-whisper
    • Give it a quick test: type mlx_whisper and hit return
  6. The Whisper model – a 3 gigabyte speech-to-text model
    • pip install huggingface_hub hf_transfer
    • export HF_HUB_ENABLE_HF_TRANSFER=1
    • huggingface-cli download --local-dir whisper-large-v3-mlx mlx-community/whisper-large-v3-mlx
    • run these commands from the folder where your videos will reside, so the model folder lands next to them

Now that ffmpeg and mlx_whisper are installed, along with the model, let’s assume you have a video to subtitle, called input.mp4.

To create an external subtitle file in the .srt format:

mlx_whisper input.mp4 --task translate --model whisper-large-v3-mlx --output-format srt --verbose False --condition-on-previous-text False
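The .srt output is just numbered cues with timestamps, something like this (made-up content):

1
00:00:01,000 --> 00:00:04,200
Where did you hide the map?

2
00:00:04,600 --> 00:00:07,000
Somewhere you will never look.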

You can open the .srt file with a text editor and take a look, as well as make manual edits if desired. Now, you can either overlay the subtitle into the video, or add it as a track, so you could turn it on/off when viewing the video.

To overlay the subtitle into the video:

ffmpeg -i input.mp4 -vf subtitles=input.srt -c:a copy output.mp4

To add the subtitle as an optional track instead:

ffmpeg -i input.mp4 -i input.srt -c copy -c:s mov_text output.mp4
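To confirm the subtitle track made it in, ffprobe (installed alongside ffmpeg) can list the streams; you should see a subtitle stream using the mov_text codec:

ffprobe -v error -show_entries stream=index,codec_type,codec_name -of csv output.mp4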

Now, suppose you wanted to do this to a folder of .mp4 files. You could loop through them with a shell script. I created this one and it worked for me:

#!/bin/bash

# Loop through all .mp4, .mkv, and .m4v files in the current directory
for video in *.mp4 *.mkv *.m4v; do
  # Skip if no matching files are found
  [[ -e "$video" ]] || continue

  # Extract the file extension and base name
  ext="${video##*.}"
  base="${video%.*}"
  subtitle="${base}.srt"

  echo "Subtitling: $video"
  mlx_whisper "$video" --task translate --model whisper-large-v3-mlx --output-format srt --verbose False --condition-on-previous-text False
  sleep 3

  # Check if the matching .srt file exists
  if [[ -f "$subtitle" ]]; then
    output="${base}_subtitled.${ext}"
    echo "Creating video: $output"
    echo " from subtitle: $subtitle"
    ffmpeg -i "$video" -i "$subtitle" -c copy -c:s mov_text "$output"
  else
    echo "Subtitle not found for $video"
  fi
done

Because my media player can play .mp4, .mkv, and .m4v files, and they all work with these commands, I also added those formats into the loop.

TimesX – Released on the App Store

This past weekend was dedicated to getting my first App Store submission in order. I was overjoyed that it was accepted on the first review!

Music: Pacific Sun by Nicolai Heidlas

I originally made this as a learning tool for my son, who was always on his iPad; I wanted him to get more multiplication practice for grade 2. I downloaded several apps which were entertaining and gave him practice, but I wanted more. I wanted to see if he was making progress, which questions he was getting wrong most often, and how long it took him to do a test today vs last week. So, being a programmer, I made my own app for his iPad.

Features

  • Tests are saved on your device with letter grade, percent score, test time and more
  • With more use, ‘Error Counts’ shows you where your child needs help
  • Choose which times tables will be on the test
  • Limit the selected tables to x10 for younger students or x12 for older ones
  • Live timer display optional

Programming

At first, I made the app in Xamarin Forms. This was going to be the quick and easy way to put it together for him to start using it. Also, if I later decided to port it, it would be 95% ready for Android as well as iOS. When I started transitioning the app from “my son’s learning tool” to “TimesX”, I ran into issues displaying it on ALL iOS devices from the same layout. That’s when I decided to leave “forms” and move on to Xamarin iOS. The constraints system implemented by Apple for Xcode was made for this, and Xamarin carried it through in its iOS implementation. It was a bit of a learning curve, but it’s second nature now – constraints are a great solution that allows this app to display on every iOS device from an iPhone 4s to a 12″ iPad Pro, in both portrait and landscape.

Facebook Promo Page: https://www.facebook.com/timesxmultiplicationtester

Digital Privacy, Security, and How I’m Safer on the Internet Now.

Earlier this year I watched the documentary “Citizenfour”, about Edward Snowden’s revelations on government spying. Unlike any other documentary I’ve seen, this one had me on the edge of my seat, feeling tense, shocked, and violated all at the same time. Though I have no illegal activities to hide, I cannot be comfortable with the level of access the spying agencies have to our computers, cellphones, and other connected devices. Private spying (hacking) has also become so rampant that it seemed protection from agency spying might double as protection from hacking.

I decided to step up my game and see if I could maintain privacy in this Orwellian environment. A month of research later, I came up with a solution: a VPN. A VPN, or Virtual Private Network, encrypts all the traffic between your device and the VPN server. Here is an example scenario of a regular connection vs a VPN connection:

Scenario: Regular Connection VS VPN

-You and I are at a mom and pop ice cream store
-We are both on our iPhones using the FREE WiFi
-mom and pop have a son with ambitions to be the next “Mr Robot” super hacker
-son set up the FREE WiFi network we are using
-son has taught himself enough Linux and network administration skills to port scan, traffic sniff, and see everything you are doing on his network (URLs, IMs, emails, and more)
-son can NOT see what I’m doing. All he can see is encrypted chunks of data going back and forth to one location, which he cannot decrypt

This is VPN. All my data – including URLs, requests, responses, etc. – flows through a server over an encrypted connection. When I visit a web page, the URL is part of the encrypted data between me and the VPN server, which handles the request to that web page. “Son”, or anyone else between me and the VPN (like my internet provider), cannot see what pages I visit or what is in my data.

Not All VPNs Are Safe

So what if the VPN provider decides to spy on me? Or logs all my traffic and uploads it to the NSA? This was something I dug deeper into during my research. My wish list for a VPN service provider evolved into this:

– no logging of my data
– good encryption
– good performance (bandwidth)
– usable on my computers, phones, and tablets (all at the same time)
– decent price
– good reputation for privacy and reliability

I had used VPN connections for work before, but real privacy is something you have to pay for. On a company VPN, the company can still see your unencrypted traffic, because they operate the VPN server.

Speed

My final choice was Private Internet Access (which I’ll refer to as PIA). I have it set up on my Mac, PC, iPhone, Android, and Linux box. You can see PIA’s supported clients here. When not using the VPN, I can download up to 12 megabytes per second on my 100 megabit connection. On the VPN I’ve reached up to 4 megabytes per second, but typically cap around 2. These are good speeds considering that the VPN provider has to service many other individuals simultaneously, and more than fast enough for YouTube and other video streaming.


Geolocation

I have a choice of servers all over the country and all over the world. This gives me better connections wherever I am, but also allows me to “be” in other places when I need to be. For example, when in Canada, certain websites redirect you to the Canadian .ca versions of the page. Your IP location is used for that redirection behind the scenes. On PIA, I simply connect to a US VPN server and the problem is solved. This could also hold true for people in countries with censorship and other restrictions, as the agencies blocking certain URLs and IP addresses would never see them in the encrypted VPN traffic.

Hacker Proof?

Other than making me digitally safe in the ice cream shop and able to spoof my location, VPN has other advantages. My true IP address is never revealed when I surf the internet on VPN. If a hacker were trying to get to my computer via the internet, my IP address would appear as one of PIA’s VPN servers, so tracing back to my actual computer should technically be impossible. Although there may be other ways hackers can get to your computer, blocking the direct path through the internet is a big step toward safety.
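An easy way to see this for yourself, assuming the third-party ifconfig.me service (any “what is my IP” site works):

# Prints the public IP address the rest of the internet sees for you.
# With the VPN off it is your real address; with it on, the VPN server's.
curl ifconfig.me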

iOS Map App Tutorial in C# using Xamarin

Preface

Xamarin is a powerful development environment for creating apps for multiple platforms from the same C# code base. The new Xamarin Forms technology will even allow you to share most of your UI code between different mobile platforms. I just have to say I LOVE this technology, because I can make Android & iOS apps from the same code rather than using two or three other languages.

On to our app. This example is meant to be simple and quick, and to give you a taste of Xamarin development. We are going to make an app that shows a map and then zooms in to your current location when you click a button.

Prerequisites

You can deploy this app to the iOS Simulator, or to a real iOS device if you have that set up on your Mac already. Oh yeah, you’ll need a Mac; otherwise you’d have to do this somewhat differently on a PC using Visual Studio with the Xamarin plugins. The app may be too big to deploy using the Xamarin Starter edition on its own; if you try to publish for iOS and get messages about the app size, take the option to start a free trial.

Create an iPhone App
    1. The first thing we want to do is create an iPhone app project in Xamarin. Choose File > New Solution
    2. Give this solution the name Locator. Xamarin will generate a project that should look like this:

Visually Build the Screen
    1. Next, we visually build our app screen with some standard iOS components. To do that, go to the Solution pane on the left and double click the file MainStoryboard.storyboard. You should get a blank storyboard like this:

    2. Note the Toolbox and Properties panes. We are going to drag components from the Toolbox onto the storyboard, size and position them, and then customize them in the Properties. Drag these components from the Toolbox onto the storyboard so it looks like the image below:
      a) Label
      b) Button
      c) Map Kit View

Customize Components and Add Hooks for the Code
    1. To resize components, just grab an edge and drag. Let’s start with the Label. Click on it and then go to the Widget tab of the Properties pane. Change the Text property from “Label” to “Locator”
    2. Next, let’s give the Map Kit View a hook so we can access it in code. Click on it and set the Name property to “myMap”.
    3. Finally, click on the Button and change the Title text to “Find Me”
    4. Set the Name property to “findMeButton”
    5. To detect the user pressing the button, we could set up the Events tab and write functions or we could let Xamarin generate this code for us. Double-Click the Button on the storyboard and Xamarin should switch you to the LocatorViewController.cs tab, where you’ll see this yellow code hint:

    6. Press Enter and Xamarin should generate this code:
partial void findMeButton_TouchUpInside (UIButton sender)
{
    throw new NotImplementedException ();
}
Start Coding
    1. Let’s save our work at this point using File > Save All
    2. Now, let’s replace the code in the findMeButton_TouchUpInside function. Add this line:
partial void findMeButton_TouchUpInside (UIButton sender)
{
    MKCoordinateRegion region;
}
    3. Notice the variable type is red, which means that the class you are editing doesn’t know what an MKCoordinateRegion is. To fix this, right-click on it and choose Resolve > Using MonoTouch.MapKit. If you scroll to the top of the class, you’ll see the MonoTouch.MapKit namespace was imported. Now let’s enter the rest of the code. The function should look like this:
partial void findMeButton_TouchUpInside (UIButton sender)
{
    MKCoordinateRegion region;
    MKCoordinateSpan span;
    region.Center=myMap.UserLocation.Coordinate;
    span.LatitudeDelta=0.005;
    span.LongitudeDelta=0.005;
    region.Span=span;
    myMap.SetRegion( region, true );
}
    4. You’ll notice as you type that Xamarin is suggesting code for you. This is similar to the IntelliSense feature in Visual Studio on the PC, as well as several other programming IDEs. Next, we need to add one more line of code to a different function. Scroll up to the ViewDidLoad function and add the myMap line:
public override void ViewDidLoad ()
{
    base.ViewDidLoad ();
    // Perform any additional setup after loading the view
    myMap.ShowsUserLocation=true;
}
    5. Save your work and let’s try this app out. Assuming you have the iOS simulator installed on your system, or have published to your iPhone before, press the Debug button

download source

Test and Debug

Since the iOS simulator does not have a real location service like your iPhone, it will simulate one. You can change the current simulated location in the iOS Simulator using Debug > Location > Custom Location and setting the longitude and latitude. You can also simulate a moving location as demonstrated in this video:

Update for iOS 8

I finished this post a week before iOS 8 officially rolled out, and the update caused the application to break. I started getting this error message:
Trying to start MapKit location updates without prompting for location authorization. Must call -[CLLocationManager requestWhenInUseAuthorization] or -[CLLocationManager requestAlwaysAuthorization] first.

A few extra steps are now required to make this work in iOS 8:

Edit the Property List File (plist)
    1. In the Solution pane on the left, look for the file Info.plist.
    2. Double click it to open the tab for it
    3. At the bottom of the window, click on the Source tab
    4. Click on the green plus symbol to add a new entry
    5. Change the text Custom Property to NSLocationWhenInUseUsageDescription
    6. Click on the Value field for this entry and enter a message to prompt the user for location access such as Please allow this app to access your location.
Add some C#
    1. Add this locationManager variable under the class definition. If CLLocationManager is red, right-click and choose Resolve > Using MonoTouch.CoreLocation
public partial class LocatorViewController : UIViewController
{
	CLLocationManager locationManager;
	// ... the rest of the class (ViewDidLoad, etc.) continues below

    2. Update the ViewDidLoad function to look like this:
public override void ViewDidLoad ()
{
	base.ViewDidLoad ();
	// Perform any additional setup after loading the view, typically from a nib.
	locationManager = new CLLocationManager();
	locationManager.RequestWhenInUseAuthorization();
	myMap.ShowsUserLocation=true;
}

Again, this last section is only for iOS 8, so you would not need to do this for iOS 7.

Video List Using Angular JS

Something I’ve been meaning to do is add a video gallery listing the videos contained in this blog. Rather than install a WordPress plugin, I decided to build one on my own using the popular JavaScript library AngularJS, along with HTML, CSS, JavaScript, and a Lightbox library for showing videos. What I like about Angular for my video list is that it has some functionality I would normally use PHP for, like dynamically drawing rows for each record of data. In the code below, the top half is the header row that contains the list filter and sortable title, while the bottom half is where all the other rows repeat for the data in the model.

<!-- this row has the filter input text and sortable title -->
<tr>

    <td>filter: <input type="text"  ng-model="searchText" /> </td>
    <td>blog link  <a href="" ng-click="predicate = 'title'; 
    reverse=false">(a-z)</a>
        <a href="" ng-click="predicate = '-title'; reverse=false">(z-a)</a></td>
    <td>dimensions width x height</td>
    <td>video link</td>
</tr>
<!-- this is where the rows repeat according to the data -->

<tr  ng-repeat="vid in video_list | filter:searchText | orderBy:predicate:reverse">

    <td><img src="{{video_thumb_prefix+vid.url_thumb}}"/> </td>
    <td><a href="{{vid.blog}}"
    target="blog"><h1>{{vid.title}}</h1></a></td>
    <td>{{vid.vid_width + ' x ' +vid.vid_height}}</td>
    <td>
        <a href="{{video_media_prefix+vid.url_media}}"
        rel="shadowbox;width={{vid.vid_width}};height={{vid.vid_height}}"
        title="{{vid.title}}">{{vid.url_media}}</a>
    </td>
</tr>

Notice the ng-repeat – that’s the magic right there for looping through data. The other great feature is that the data binding is live: as you enter text in the filter, the rows re-render to match what you typed. Using Angular instead of a server-side language enabled me to host it on my CDN as a static set of files (vs dynamic). It’s all .html, .css, .js, and .mp4 videos – no PHP. Download the source to see how it all ties together with the data:

 
// Controller setup (module and controller names are assumed here for
// illustration; the downloadable source has the actual file)
angular.module('videoApp', []).controller('VideoListCtrl', function($scope) {

    $scope.video_media_prefix="https://d3od4vl78dd97d.cloudfront.net/blogvideo/";
    $scope.video_thumb_prefix="https://d3od4vl78dd97d.cloudfront.net/blogpreviews/";
    $scope.predicate = '-title';
    $scope.video_list=[

        {
            id:0,
            url_media:'tron1.mp4',
            url_thumb:'tron1.jpg',
            title:'Tron',
            vid_width:640,
            vid_height:480,
            blog:'http://www.arnoldbiffna.com/2014/05/20/tron/'
        },
        {
            id:1,
            url_media:'radionbt.mp4',
            url_thumb:'radionbt.jpg',
            title:'Radio Disney Next Best Thing',
            vid_width:508,
            vid_height:384,
            blog:'http://www.arnoldbiffna.com/2014/05/21/radio-disney-n-b-t/'
        },
        {
            id:2,
            url_media:'jonasbrothers.mp4',
            url_thumb:'jonasbrothers.jpg',
            title:'The Jonas Brothers',
            vid_width:511,
            vid_height:322,
            blog:'http://www.arnoldbiffna.com/2014/05/21/jonas-brothers/'
        },
        {
            id:3,
            url_media:'pepsirefresh.mp4',
            url_thumb:'pepsirefresh.jpg',
            title:'Pepsi Super Bowl',
            vid_width:509,
            vid_height:354,
            blog:'http://www.arnoldbiffna.com/2014/05/20/pepsi-super-bowl/'
        },
        {
            id:4,
            url_media:'pandajamad.mp4',
            url_thumb:'pandajamad.jpg',
            title:'Panda Jam Ads',
            vid_width:640,
            vid_height:480,
            blog:'http://www.arnoldbiffna.com/2014/05/14/panda-jam-ads/'
        }

    ];


});

MookieData C#

In 2012, when I was trying to make my game SlotFriendzy profitable, I felt the need for an admin tool to analyze the game’s usage and performance. I didn’t want it hosted on the web, so I decided to build it as a Windows desktop app. I chose Visual Studio and built it on the .NET 4.5 Framework, along with a MySQL C# connector I downloaded from the MySQL web site.

For security reasons, I normally would not put a database connector directly in a client-side application, but in this case the intended audience was a very specific population – me. Designing a Forms-based app in Visual Studio was easy, and this simple layout made it easy to add commands every time I thought of something I wanted to query on a regular basis.

I named it MookieData because the game belonged to my company, Mookie Games Inc.

Form Creator

In late 2013, I started working at Deluxe Entertainment in Burbank, debugging several versions of a LAMP stack application built on Linux, Apache, MySQL, and PHP, along with AMFPHP and Flex. The project was Media Recall, a digital media storage and cataloging system for several major clients, including Harpo (Oprah Winfrey), Johnny Carson, Martha Stewart Online, and more. Later on, I moved into tools development for the Operations team and created this application. Several of the tools being developed by the team used an XML format for their forms, and creating that XML by hand was becoming a large task. My project was to simplify that with a visual tool for creating and modifying the XML. This tool was also a LAMP stack application, where my focus was mainly on the Flex and PHP.