I am going to take a stab here and say no, you can't create a "patch" on the AmpliFi and then port it over to the HD500. One reason is that the algorithms behind the HD500's effects and amps are of a better design than the X3's, which appears to be what the AmpliFi is using. So there would be many amps and effects on the AmpliFi that would not be available on the HD500.
I think I may need to clarify. I don't mean create it in the app, export it to a file, import it into POD HD500 Edit or Monkey, and have it appear.
I mean, could the app feasibly use its algorithm(s) to pin down its interpretation of a tone (as demonstrated in the Andertons video with the AC/DC grab)? It lays out an effects chain and lists the amps used (and maybe even microphone models). Presumably those are close to the effects/amps in the sample it maths up, arranged in the proper order using the nearest equivalents. It'd probably need some further tweaking to get even closer to the actual rig, but that's not the point. And again, I understand there'll be far more options on the AMPLIFi than on the POD, so some leaps will have to be made.
Could you then use this information (in an analog manner) to replicate the tone using the POD HD500 software? Like: it uses these amps, this distortion, this delay, this phaser, this etc. Maybe the amp model on the AMPLIFi says it's a JCM-900, but the closest option on the POD is a JCM-800. The AMPLIFi says it's the Line 6 Distortion, so you go with that.
Maybe I'm misunderstanding the capabilities, but presumably it includes the levels as well. It would take all the guesswork I'm terrible at out of it, using the derived tone chain as a guide on the POD.
Plus, the AMPLIFi app will not process sound; you'd need the AMPLIFi hardware to even audition what you had created.
That was my other question: would anything happen without the hardware? It's all Bluetooth, so it would likely only work with paired devices (vs. an iRig input, just for the app). I'm just asking whether there's a way to let the app do the thinking/heavy lifting for me.