Category Archives: Coding

Everything on coding, algorithms etc.

Having 100% code coverage doesn’t solve all your problems, having less solves even less.

I recently came across a post about achieving 100% code coverage and how reassuring it is. But is it? 

What does 100% code coverage mean? Ignoring the differing definitions, it basically says that all your code was executed (covered) during the tests (the definition of “all your code” is a bit flexible here). But does that say anything about your code quality, or about the quality of your tests?

The answer is no. And it’s a big NO, because having full coverage doesn’t mean your code does what it should. Remove all the assertions from your tests and you still have 100% coverage, but with no quality assurance whatsoever. The only real advantage of 100% coverage is that you know there is no unexpected exception thrown (unless you did something really nasty like @expected(exception) in your tests, but then you had it coming). At least not when the code is executed the way you executed it. (Yes, 100% coverage doesn’t mean you covered all possible cases, just all code.)
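To make that concrete, here is a minimal sketch in XCTest (the Calculator class is made up for illustration): the test executes every line of the method under test, so coverage reports 100%, but since it asserts nothing it can never catch a wrong result.

#import <Foundation/Foundation.h>
#import <XCTest/XCTest.h>

@interface Calculator : NSObject
- (NSInteger)add:(NSInteger)a to:(NSInteger)b;
@end

@implementation Calculator
- (NSInteger)add:(NSInteger)a to:(NSInteger)b {
    return a + b; // fully executed by the test below
}
@end

@interface CalculatorTests : XCTestCase
@end

@implementation CalculatorTests
- (void)testAdd {
    // Runs every line of add:to:, so coverage reports 100%...
    [[Calculator new] add:2 to:3];
    // ...but with no XCTAssertEqual(..., 5) this test can never fail,
    // no matter what add:to: actually returns.
}
@end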

On the other hand, having less than 100% means you missed some code and therefore definitely don’t know how it behaves in production.

Long story short: 100% code coverage is a good thing to have, but it’s like being able to type fast: nice to have, but when it comes to quality code you need much more than that.

Fitting Plane to Point Cloud – Hands on

In my last post I described how to derive the equations for fitting a plane to a given point cloud. Now let’s put them into some sample code. I will use the same notation as in the previous post.

When I first started with the optimization, I thought about distinguishing the different cases by Txx != 0. I had already worked out the math for the case Txx == 0 etc. And then I started testing with real-world data: random points which lie approximately on a plane. The sad part is: even with a point cloud where Txx should be approximately zero, it wasn’t close enough to zero to make a valid decision. (And yes, I know that comparing a float with 0 is not the way you do it.) So sometimes the algorithm ended up in the Txx != 0 case, setting c = 1 where c should actually be 0, but because of the unequally distributed values, it wasn’t… Continue reading Fitting Plane to Point Cloud – Hands on
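As an aside on the float comparison: the usual fix is to test against a tolerance that scales with the data instead of against literal zero. A minimal sketch (the tolerance value and the choice of scale are arbitrary assumptions, not taken from the post):

#include <math.h>
#include <stdbool.h>

// Treat a moment like Txx as "zero" only relative to the magnitude of the data,
// e.g. scale = Txx + Tyy + Tzz (the trace of the moment matrix).
static bool NearlyZero(double value, double scale) {
    const double kRelTol = 1e-9; // arbitrary choice, tune for your data
    return fabs(value) <= kRelTol * fmax(fabs(scale), 1.0);
}

Though, as the paragraph above suggests, even a scaled tolerance may not make the case distinction reliable on real-world data.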

3D linear regression: Fitting planes on Pointclouds

Time for a math post.

Linear regression, i.e. fitting a line to a list of 2D points, is quite common, with lots of code samples around. But once you want to go further towards more dimensions, there are no examples whatsoever… You find the Wikipedia article about the theory, but everything is in general terms. It’s about calculating the inverse of a matrix and lots of stuff that you don’t want to do when thinking about performance. Continue reading 3D linear regression: Fitting planes on Pointclouds
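For orientation, here is the simplest formulation (my notation, not necessarily the post’s): fit the plane z = ax + by + c by minimizing the squared residuals. Setting the three partial derivatives to zero yields a fixed 3×3 linear system, so no general matrix inversion is needed. Note that this form cannot represent vertical planes, which is exactly why more general parameterizations need case distinctions like the one discussed in the follow-up post.

E(a,b,c) = \sum_{i=1}^{n} (a x_i + b y_i + c - z_i)^2

\begin{pmatrix}
\sum x_i^2   & \sum x_i y_i & \sum x_i \\
\sum x_i y_i & \sum y_i^2   & \sum y_i \\
\sum x_i     & \sum y_i     & n
\end{pmatrix}
\begin{pmatrix} a \\ b \\ c \end{pmatrix}
=
\begin{pmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{pmatrix}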

ICP with SVD – Hands on

Hi everyone,

since you came across a post with a cryptic title like that, chances are high that you are already deep into the topic. But for the 1% who aren’t, here is some short background info:

3D-Scan Matching using Iterative Closest Points (ICP)

When you have a 3D scanner (like a laser scanner, a Kinect or whatever) and try to scan a room or an object, the first problem is how to combine two scan frames into one point cloud. Usually the scanner moved between the frames while still providing some overlap to be matched. But how do you do that? Continue reading ICP with SVD – Hands on
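In a nutshell, the SVD step the title refers to is the standard closed-form solution for rigid alignment (often attributed to Kabsch/Umeyama; details may differ from the full post). Given matched point pairs (p_i, q_i):

\bar{p} = \tfrac{1}{n}\sum_i p_i, \qquad \bar{q} = \tfrac{1}{n}\sum_i q_i

H = \sum_i (p_i - \bar{p})(q_i - \bar{q})^T, \qquad H = U \Sigma V^T

R = V U^T \;(\text{flip the sign of the last column of } V \text{ if } \det R < 0), \qquad t = \bar{q} - R\,\bar{p}

ICP then alternates between matching closest points and re-solving for (R, t) until the alignment converges.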

Integrating Unity into a Native iOS App – Example

After describing the idea of adding Unity into a native app here, I got a lot of questions about sample source code. So here it is 😉

For a minimal working sample of how to integrate Unity, you of course need a Unity project. The best way to start is from the iOS export of the Unity project. Personally, I prefer not to change any of the Unity-generated files themselves but only to integrate them, so I don’t have to take care of those changes every time I do a new Unity export.

After exporting from Unity you get a “normal” Xcode project containing a “Classes” and a “Libraries” folder. You should be able to run it on a device; otherwise there is some problem with your Unity project which needs to be fixed before integrating it. Continue reading Integrating Unity into a Native iOS App – Example

Unity 5 with il2cpp into iOS

Integrating Unity as a subview into a native iOS app went from being a pain in the ass to being kinda smooth as Unity evolved from 3 to 4. But when you needed to pass data (and I mean more than serialized stuff via sendMessage) into the view, the whole Mono stuff was still… let’s say, inconvenient. But fear not, Unity 5 is here to the rescue!

Stepping from 4 to 5, Unity decided to drop Mono and use il2cpp instead. Since we went through a lot with Mono and it was always in a state of “I hope this isn’t blowing up”, we surely took a look at the new and fancy stuff. Continue reading Unity 5 with il2cpp into iOS

Keyframe Animation in iOS 7

With iOS 7 Apple added a nice feature for animating views:

UIView animateKeyframesWithDuration:delay:options:animations:completion:

paired with

UIView addKeyframeWithRelativeStartTime:relativeDuration:animations:

With these handy methods you can do nice keyframe animations without the code overhead of nested animations. You set up the overall duration of the animation with the options you already know from the “old” animation methods. Then, within the animations block, you add the keyframes with a relative start time and duration.

Not the keyframes you know

Now this is a bit against the “keyframe idea”. It’s not like traditional keyframe animation, where a keyframe describes the animated objects at that particular frame. It’s more like a nested animation within the overall timeline, with a relative start time and duration. Unfortunately, it doesn’t come with its own options or completion block.

The missing completion block is a bit of a problem. You can animate everything that was animatable before, but if you want to, for example, change the image of a UIImageView at some point within the animation, you can’t. The image property is not animatable, and therefore if you set the image to a different value within the animation block, it is set to that value right at the beginning of the animation.

Of course you can work around that with an additional UIImageView and by playing with the alpha within the animation, or go back to nested/sequential animations, but in the end a keyframe animation should be able to change even non-animatable things at keyframes.
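For the workaround just mentioned, a minimal sketch (the image views are made up; assume both share the same frame and newImageView starts fully transparent):

UIImageView *oldImageView, *newImageView; // stacked on top of each other, newImageView.alpha = 0
[UIView animateKeyframesWithDuration:1.0 delay:0 options:0 animations:^{
  [UIView addKeyframeWithRelativeStartTime:0.5 relativeDuration:0.25 animations:^{
    // cross-fade instead of setting the (non-animatable) image property
    oldImageView.alpha = 0;
    newImageView.alpha = 1;
  }];
} completion:nil];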

Sample

For coders like me who search for working samples, here is a small snippet which fades out one view and simultaneously (but faster) reduces its size to zero. And to make things fancier, after 50% is done, another view appears on top and fades in with a size effect of its own.

UIView *view1, *view2; // assume both views exist in the hierarchy, view2 on top
view2.alpha = 0;
CGRect targetFrame = view1.frame;
[UIView animateKeyframesWithDuration:DURATION
                               delay:0
                             options:UIViewKeyframeAnimationOptionAllowUserInteraction
                          animations:^{
  [UIView addKeyframeWithRelativeStartTime:0 relativeDuration:1 animations:^{
    view1.alpha = 0; // fade out over the whole duration
  }];
  [UIView addKeyframeWithRelativeStartTime:0 relativeDuration:0.8 animations:^{
    view1.frame = CGRectZero; // shrink faster than the fade
  }];
  [UIView addKeyframeWithRelativeStartTime:0.5 relativeDuration:0.5 animations:^{
    // start at 50%; relative start + duration must stay within [0, 1]
    view2.alpha = 1;
    view2.frame = targetFrame;
  }];
} completion:^(BOOL finished) {
  // completion block after the whole animation is done
}];

Integrating Unity into a native iOS App

This post describes the integration for Unity 4. If you are working with Unity 5 (which you should 😉), look at this post.

For a fully working sample with Unity 5, including source code, look at this post.

With Unity, it’s really easy to create and deploy stunning 3D apps. But what if you have a native iOS app implemented in Objective-C and only want to add Unity content as a view?

Lucky us: With the new version of Unity, this is easier than ever.

The easiest way is to start your native app from the Unity-generated Xcode project. Don’t worry, it’s really easy to update the Unity stuff within the app when it changes after the first export (which usually happens a lot). This way you don’t have to set up all the build settings Unity needs on your own.

The first thing you want to do is change the Debug Information Format in the build settings from “DWARF with dSYM” to “DWARF”. Otherwise you waste 30+ seconds on each build. Just keep in mind that you changed it, in case you need to change it back for debugging later.

Since everyone should use ARC nowadays, activate that in the build settings too. Make sure you add -fno-objc-arc to all .m and .mm files generated by Unity (which, at this point, are all of them in the project).

Now create your own AppDelegate class by subclassing the Unity version, which is called UnityAppController. Your AppDelegate must be a .mm file. Generally, the UnityAppController takes care of everything related to the app lifecycle, so you won’t need to implement that stuff yourself. The only important thing is to add

IMPL_APP_CONTROLLER_SUBCLASS(AppDelegate)

at the beginning of AppDelegate.mm. (FYI, this macro is defined in UnityAppController.h.)
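A minimal AppDelegate.mm could look like this (a sketch; the startUnity: override is explained below):

#import "UnityAppController.h"

@interface AppDelegate : UnityAppController
@end

// Registers this subclass as the app's controller (macro from UnityAppController.h).
IMPL_APP_CONTROLLER_SUBCLASS(AppDelegate)

@implementation AppDelegate
// lifecycle overrides go here, e.g. -startUnity: (see below)
@end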

When the application launches (e.g. in application:didFinishLaunchingWithOptions:), Unity creates a UIWindow and presents a splash screen. Then it starts the initialization of the real Unity engine asynchronously (scheduled with delay 0 on the main thread). This means that if you want to do anything AFTER the Unity view is loaded (e.g. put it as a subview into your own application), you don’t want to put that code into application:didFinishLaunchingWithOptions:. You’d better override startUnity::

- (void)startUnity:(UIApplication *)application {
    [super startUnity:application];
    // anything that should be done AFTER the Unity view hierarchy is loaded,
    // like replacing the rootViewController with your own
}

That’s the method that gets called asynchronously: in [super startUnity:] the UnityAppController creates the Unity view and its view controller, sets the Unity view controller as the window’s rootViewController and adds the Unity view as a subview of the window. Therefore it doesn’t help to change these things before startUnity: is done.

If you add the Unity view controller as a child view controller within your own hierarchy, you only have to set its view’s transform back to the identity: Unity adds the view directly as a subview of the window and therefore sets the transform to reflect the rotation of the app.
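A minimal sketch (assuming the UnityGetGLViewController() helper that the Unity export declares in UnityInterface.h; the unityContainer view is made up):

UIViewController *unityVC = UnityGetGLViewController();
[self addChildViewController:unityVC];
unityVC.view.transform = CGAffineTransformIdentity; // undo Unity's rotation transform
unityVC.view.frame = self.unityContainer.bounds;    // unityContainer is a hypothetical host view
[self.unityContainer addSubview:unityVC.view];
[unityVC didMoveToParentViewController:self];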

If you want to change the size of the Unity view (Unity always assumes it is shown fullscreen), you need UnityRequestRenderingResolution. This function (yes, a C-style function) is defined in UnityInterface.h. Just call it with the new dimensions, resize the view and call setNeedsLayout on it.
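For example (the size is made up, and whether the resolution is expected in points or pixels may depend on the Unity version, so treat the scale handling as an assumption):

UIView *unityView = UnityGetGLView(); // helper declared in UnityInterface.h
CGRect newFrame = CGRectMake(0, 0, 320, 240);
CGFloat scale = [UIScreen mainScreen].scale; // assuming pixels are expected
UnityRequestRenderingResolution((unsigned)(newFrame.size.width * scale),
                                (unsigned)(newFrame.size.height * scale));
unityView.frame = newFrame;
[unityView setNeedsLayout];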

Once you have added the Unity view controller (obtained, for example, by taking the rootViewController from the UnityAppController) as a child of your own UIViewController, you can easily add any native interface on top of Unity. And with UnityRequestRenderingResolution you can furthermore use the Unity view as a non-fullscreen subview within your layout.

If you have any further questions or problems, just leave a comment.