Stereoscopic 3D Panoramic Images

Written by Paul Bourke
May 2002

Updated October 2003 to handle spherical and very high resolution maps.
Updated April 2004 to support stereoscopic cubic and planar maps.

See also: Synthetic stereoscopic panoramic images
Lecture Notes in Computer Science (LNCS), Springer, ISBN 978-3-540-46304-7, Volume 4270/2006, pp 147-155


There are three frequently used techniques for rapidly displaying either photographic or computer generated surround environments: cylindrical panoramics, spherical maps, and cubic maps. In the mid 1990s panoramic and spherical maps were popularised by Apple with their QuickTime VR software; more recently (2000) that technology was extended to handle cubic maps. In all three cases images are mapped onto some geometry (cylinder, sphere, cube) with a virtual camera located in the center; depending on the performance of the host hardware and software the user can interactively look in any direction. This can lead to a strong sense of immersion, especially if the environment is projected onto a wide display that fills a significant part of the viewer's field of view. One might ask how a greater sense of immersion can be achieved and, in particular, whether stereoscopic projection is possible. It turns out that stereoscopic 3D cylindrical panoramas are straightforward to create, and the rest of this document discusses the process for computer generated stereoscopic 3D panoramas. A more recent addition to this document describes an interactive viewer and shows examples of stereoscopic panoramics of real world environments.

As with all stereoscopic 3D projection it is necessary to create two images from slightly different viewpoints corresponding to the two human eyes; in this case we need to create two panoramics. Many rendering packages support panoramic cameras but they are modelled after a centered camera. For a stereoscopic panorama one instead creates each panoramic using a camera with a narrow horizontal field of view and a wide vertical field of view. A large number of these slice renderings are calculated as the camera is rotated, and all the resulting slices are stitched together to form the final panoramic. In the following example 360 one degree slices are created and joined together to form the panoramic below.

The reason one needs to resort to such a scheme is that the cameras, unlike a normal panoramic camera, don't rotate about the center of the camera but rather about a rim with a radius of half the intended eye separation. Two possible topologies are illustrated below: in the first the view direction vectors for the two cameras are parallel to each other; in the second they are toed-in and meet at what will be called the focal length (this is the distance of zero parallax). To summarise, in either geometry the left and right eye cameras rotate by some small amount (say 1 degree) and a rendering is performed with a perspective camera with a 1 degree horizontal aperture and a larger vertical aperture (eg: 90, 120, <180 degrees). The exact settings that will ensure the slices join properly depend on the rendering software being used.
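As a concrete illustration of this rig, the following is a minimal C sketch (not the original camera code) that computes, for each slice, the two eye positions on the rim and their view directions for either the parallel or the toed-in configuration. All names and numeric values are illustrative only.

/* Sketch of the per-slice camera rig geometry described above.
   r : half the eye separation (rim radius)
   f : focal length, the zero parallax distance (toed-in case)   */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } XYZ;

int main(void)
{
   double r = 0.05;          /* half the eye separation (rim radius)     */
   double f = 3.0;           /* focal length, the zero parallax distance */
   int    n = 360;           /* number of 1 degree slices                */
   int    toein = 1;         /* 1 = toed-in rig, 0 = parallel rig        */

   for (int i = 0; i < n; i++) {
      double phi = i * 2 * M_PI / n;                 /* rig rotation      */
      XYZ dir  = { cos(phi), 0, sin(phi) };          /* forward direction */
      XYZ side = { sin(phi), 0, -cos(phi) };         /* baseline direction*/

      /* The eyes sit on the rim, either side of the rotation center.     */
      XYZ leftpos  = { -r * side.x, 0, -r * side.z };
      XYZ rightpos = {  r * side.x, 0,  r * side.z };

      /* Parallel rig: both cameras look along dir.
         Toed-in rig:  both cameras look at the point f*dir.              */
      XYZ leftdir = dir, rightdir = dir;
      if (toein) {
         leftdir.x  = f * dir.x - leftpos.x;   leftdir.z  = f * dir.z - leftpos.z;
         rightdir.x = f * dir.x - rightpos.x;  rightdir.z = f * dir.z - rightpos.z;
      }

      /* For each slice, a perspective rendering with a 1 degree horizontal
         aperture (and a wide vertical aperture) is made from each eye.   */
      printf("slice %3d  left (%+.3f,%+.3f) -> (%.3f,%.3f)  right (%+.3f,%+.3f) -> (%.3f,%.3f)\n",
             i, leftpos.x, leftpos.z, leftdir.x, leftdir.z,
             rightpos.x, rightpos.z, rightdir.x, rightdir.z);
   }
   return 0;
}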

For the toe-in setup the final panoramic is automatically aligned, that is, objects that are the focal length away from the camera will be at zero parallax and so the two panoramics can be projected without any horizontal offset applied.

For parallel view directions the final panoramics need to be shifted horizontally with respect to each other. This can be seen in the image below for an object at the focal distance. In order for it to be at zero parallax the solid red line on the left image needs to be lined up with the solid blue line on the right image.

The degree of horizontal shift is easy to calculate given the geometry above. If r is half the eye separation then the angle theta is given by

theta = 2 asin(r / focallength)

And so the pixel shift is just the proportion of this angle to 360 degrees. The pixel shift can either be applied when joining the slices together to form the panoramic (recommended) or it could be applied within the stereoscopic panorama viewer.

pixelshift = width * theta / (2 pi)
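For concreteness, the following small C sketch evaluates both expressions, using an eye separation of 1/30 of the focal length (see the notes below); the numeric values are examples only.

/* Illustrative calculation of the horizontal shift needed to align the
   parallel-camera panoramas. */
#include <math.h>
#include <stdio.h>

int main(void)
{
   double focallength = 3.0;                 /* zero parallax distance   */
   double eyesep      = focallength / 30;    /* common rule of thumb     */
   double r           = eyesep / 2;          /* rim radius               */
   int    width       = 3600;                /* panorama width in pixels */

   double theta      = 2 * asin(r / focallength);
   double pixelshift = width * theta / (2 * M_PI);

   printf("theta = %g degrees, shift = %g pixels\n",
          theta * 180 / M_PI, pixelshift);
   return 0;
}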

Notes

  • As with most stereoscopy one needs to choose the focal length and the eye separation. The focal length is related to the geometry in the scene, namely what distance should be at zero parallax, while making sure objects never come too close to the camera. For most safe viewing the eye separation is taken to be 1/25 or 1/30 of the focal length.

  • By rotating the camera rig clockwise, the image slices can be added sequentially from left to right to make up the final panoramics.

  • It isn't necessary to make the slices the exact size; one might make wider renderings and extract the central portion. One reason for this can be to ensure antialiasing at the edges is performed properly, the details are dependent on the rendering software.

  • The above discussion relates to cylindrical panoramic images; the same applies to spherical panoramas. However, the stereo pairs get increasingly distorted as one moves towards the poles of the spherical map.

  • The horizontal aperture of the camera is the same as the angle between pairs of camera positions.

Cylindrical

Capture using one camera

It is possible to capture stereoscopic panoramic images by using one camera, generally with a wide angle lens. The camera is rotated in small steps around a circle with its view direction perpendicular to the circle, as shown below.

A strip of pixels is extracted from each image and the strips are aligned next to each other to create the left and right eye panoramic images. Depending on which pair of strips is chosen, the effective eye separation for the panoramic images can be varied, see the inner circle above.
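A minimal sketch of that strip extraction is given below, assuming the captured frames are already in memory as RGB buffers; the frame size, strip width and strip offset are example values, and which strip feeds which eye depends on the direction of rotation.

/* Build left/right panoramas from a sequence of frames taken by a single
   rotating camera: a vertical strip left of the frame center feeds one eye,
   a strip right of the center feeds the other.  The further the strips are
   from the center, the larger the effective eye separation.               */
#include <string.h>

#define NFRAMES     360     /* one frame per degree (example)        */
#define FRAMEWIDTH  1024
#define FRAMEHEIGHT 768
#define STRIPWIDTH  4       /* pixels contributed per frame          */
#define OFFSET      100     /* strip distance from the frame center  */

static unsigned char frame[FRAMEHEIGHT][FRAMEWIDTH][3];
static unsigned char left [FRAMEHEIGHT][NFRAMES * STRIPWIDTH][3];
static unsigned char right[FRAMEHEIGHT][NFRAMES * STRIPWIDTH][3];

/* Copy one strip from the current frame into each panorama. */
void addstrips(int framenumber)
{
   int lx = FRAMEWIDTH / 2 + OFFSET;   /* strip for one eye             */
   int rx = FRAMEWIDTH / 2 - OFFSET;   /* strip for the other eye       */
   int dest = framenumber * STRIPWIDTH;

   for (int j = 0; j < FRAMEHEIGHT; j++) {
      memcpy(left [j][dest], frame[j][lx], STRIPWIDTH * 3);
      memcpy(right[j][dest], frame[j][rx], STRIPWIDTH * 3);
   }
}

int main(void)
{
   for (int n = 0; n < NFRAMES; n++) {
      /* load frame n into "frame" here (image reading omitted) */
      addstrips(n);
   }
   /* write out "left" and "right" here (image writing omitted) */
   return 0;
}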

Example using PovRay

In order to facilitate the creation of the rotating camera rig in PovRay a camera include file was created. This should be included in the PovRay scene file in place of any other camera specification. It makes a number of assumptions (for example, up is the y axis) but it gives the basic idea.

To see how one might use this, here are the ini and pov files for the left and right panoramics for a scene courtesy of Joseph Strout: test1left.ini, test1left.pov, test1right.ini, test1right.pov. The ini file creates a 360 frame animation with a 1 degree wide camera. The final panoramic in this case will be 3600 pixels by 1800 pixels.

Update (Nov 2007): a custom camera for PovRay that renders a stereoscopic panoramic image pair directly.

Stereo-capable panoramic viewer

Writing a panoramic viewer based upon OpenGL is 'trivial'; it only requires a cylinder with the panoramic mapped as a texture. Writing a stereoscopic viewer is not much more difficult.

The main complication for high resolution panoramics is the texture memory available and the largest texture supported. For example, a 4096 by 2048 texture is usually going to require 32 MB. Many OpenGL drivers place modest limits on the largest texture size; the way around any such restriction is to tile the panoramic in N by N pieces on the cylinder.
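The following is a minimal fixed-function OpenGL sketch of that tiling approach, not the actual viewer source: the panorama is cut horizontally into tiles, each tile becomes its own texture, and each texture is mapped onto the matching arc of the cylinder. Tile sizes, counts and cylinder dimensions are illustrative, and window/stereo setup and image loading are omitted.

/* Tile a cylindrical panorama so no single texture exceeds the driver
   limit (query with glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...)).          */
#include <GL/gl.h>          /* <OpenGL/gl.h> on Mac OS X */
#include <math.h>

#define NTILES  8              /* horizontal tiles around the cylinder    */
#define TILEW   1024           /* tile width,  <= GL_MAX_TEXTURE_SIZE     */
#define TILEH   2048           /* tile height, <= GL_MAX_TEXTURE_SIZE     */
#define RADIUS  10.0           /* cylinder radius                         */
#define HEIGHT  10.0           /* cylinder height                         */
#define NSEG    16             /* facets used to draw each tile's arc     */

static GLuint textureid[NTILES];

/* Create the texture objects and upload one RGB tile (TILEW x TILEH).    */
void maketile(int tile, const unsigned char *rgb)
{
   if (textureid[0] == 0)
      glGenTextures(NTILES, textureid);
   glBindTexture(GL_TEXTURE_2D, textureid[tile]);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TILEW, TILEH, 0,
                GL_RGB, GL_UNSIGNED_BYTE, rgb);
}

/* Draw the arc of the cylinder covered by one tile.                      */
void drawtile(int tile)
{
   double a0 = 2 * M_PI * tile / NTILES;     /* start angle of this tile  */
   double da = 2 * M_PI / NTILES;            /* angle spanned by the tile */

   glEnable(GL_TEXTURE_2D);
   glBindTexture(GL_TEXTURE_2D, textureid[tile]);
   glBegin(GL_QUAD_STRIP);
   for (int i = 0; i <= NSEG; i++) {
      double a = a0 + da * i / NSEG;
      glTexCoord2f((float)i / NSEG, 0);
      glVertex3f(RADIUS * cos(a), -HEIGHT / 2, RADIUS * sin(a));
      glTexCoord2f((float)i / NSEG, 1);
      glVertex3f(RADIUS * cos(a),  HEIGHT / 2, RADIUS * sin(a));
   }
   glEnd();
}

The same idea extends to tiling in both directions (N by N) for spherical maps and very large planar images.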

Examples
Left, Right

Extensions, October 2003

The viewer originally written for cylindrical stereoscopic panoramic images has been extended as follows.

  • Support for spherical panoramics.

  • Support for panning over large planar stereoscopic images.

  • Removal of restrictions found in most other viewers (eg: QuickTime VR); in particular it is possible to barrel roll, in other words, the virtual camera need not be upright. While this is useful in mono mode it has limited application when viewing stereoscopic panoramic pairs.

  • The viewer runs under Linux (with hardware OpenGL support) and Mac OS X; others are almost certainly possible.

  • Support for multiple synced and optionally gen-locked machines has been implemented. A server and n clients are supported through TCP/IP communications; any user action on the server is replicated on the clients (a sketch of the client side is given after this list). This has been tested on the 8 machines in the VROOM environment.

  • Edge blending has been implemented to provide a double width display on a dual display card. For examples see: Edge blending with commodity projectors. This includes the ability to interactively vary the edge blending parameters, save them, and read them back when launching the application.
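A minimal sketch of the client side of such a synced viewer follows. The port number, message layout and field names are invented for illustration; they are not the protocol actually used by the viewer.

/* Hypothetical client of a synced viewer: connect to the server and apply
   whatever view state it broadcasts after each user action.              */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

typedef struct { float pan, tilt, aperture; } VIEWSTATE;

int main(int argc, char **argv)
{
   const char *server = (argc > 1) ? argv[1] : "127.0.0.1";
   struct sockaddr_in addr;
   VIEWSTATE view;

   int fd = socket(AF_INET, SOCK_STREAM, 0);
   memset(&addr, 0, sizeof(addr));
   addr.sin_family = AF_INET;
   addr.sin_port   = htons(12345);                /* illustrative port */
   inet_pton(AF_INET, server, &addr.sin_addr);
   if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
      perror("connect");
      return 1;
   }

   /* Each time the server's camera changes it sends a VIEWSTATE; the
      client simply applies it to its own camera and redraws.            */
   while (recv(fd, &view, sizeof(view), MSG_WAITALL) == sizeof(view)) {
      printf("pan %g  tilt %g  aperture %g\n", view.pan, view.tilt, view.aperture);
      /* ... update the local (OpenGL) camera and redraw here ... */
   }
   close(fd);
   return 0;
}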

Some stunning examples of real world stereoscopic panoramics have been captured by Peter Murphy. An example showing the left eye of a panoramic stereo pair is given below; this is a full spherical panoramic image, the original being around 4000 pixels wide!

The following shows the left and right views from within the viewer; note that normally these would be displayed full screen on a dual display card and viewed through a dual projector passive stereo system. Using above average graphics cards (at the time of writing) this viewer was readily able to display 4096 pixel stereo panoramic pairs at 30 frames per second.

And finally, two images showing the geometry of the underlying textured cylinder and sphere.

An obvious extension is to add computer generated elements to the environment, such as avatars. To do this correctly the added geometry needs to be in the correct perspective, it may need to be occluded by geometry in the panoramic, and it needs to be illuminated in a way consistent with the lighting of the panoramic. The first step to achieving this is illustrated below: the sun position is determined, ground plane(s) are positioned so any additional geometry can lie at the correct vertical position and move into the foreground/distance correctly, and finally, if the outlines of objects in the scene are known, such as the gravestone, then any added geometry that moves behind that gravestone will be occluded by it.
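As an illustration of the ground plane step, the sketch below converts a pixel in a spherical (equirectangular) panorama into a viewing direction and intersects that ray with a ground plane, giving the 3D position at which added geometry should be placed. The panorama size, camera height and chosen pixel are example values; a real scene also needs the occlusion outlines and sun position mentioned above.

/* Place a point on the ground plane from a chosen panorama pixel. */
#include <math.h>
#include <stdio.h>

int main(void)
{
   int    width  = 4000, height = 2000;   /* spherical panorama size      */
   double camheight = 1.6;                /* camera height above ground   */
   int    col = 1234, row = 1300;         /* pixel chosen on the ground   */

   /* Pixel -> (azimuth, elevation) -> unit direction from the camera.    */
   double azimuth   = 2 * M_PI * col / width;
   double elevation = M_PI * (0.5 - (double)row / height);
   double dx = cos(elevation) * cos(azimuth);
   double dy = sin(elevation);
   double dz = cos(elevation) * sin(azimuth);

   /* Intersect the ray with the ground plane y = -camheight.             */
   if (dy >= 0) {
      printf("pixel is above the horizon, no ground intersection\n");
   } else {
      double t = -camheight / dy;
      printf("ground point at (%.2f, %.2f, %.2f)\n", t * dx, -camheight, t * dz);
   }
   return 0;
}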


Original scene courtesy of Peter Murphy.

With geometry overlaid; current primitives include line, box, plane, sphere, and light. Note how the objects align in both eyes (as they should!). Note however that the ground plane doesn't align with the lower ground level; it is actually at the level of the raised plot on the left.

Wireframe

Sun position for correct lighting of any added geometry

Extensions, April 2004

A number of changes were made to improve performance and, in addition, two new map formats are now supported, namely high resolution stereo planar images and stereoscopic cubic maps.

An example follows courtesy of Peter Murphy
Left eye cubic maps as unwrapped cube.


Side-by-Side stereo pairs for passive stereo projection.


Showing the cubic texture mesh.

Update August 2004

Added a new cubic map type; the viewer now supports 6 face cubic maps as well as 4 face cubic maps. The performance has also been greatly improved, as has the support for higher quality/resolution images. The largest cubic map attempted has been 4 x (4096x4096) in stereo with a frame rate of more than 75 fps (limited by vertical refresh synchronisation). Indeed, the frame rate is now not limited by the size of the panoramic but by the display size and the camera aperture. The limit on the size of the panoramic that can be handled is dictated by system memory. The largest stereoscopic spherical map attempted is 8192 x 8192, again with a vertical refresh limited frame rate on a 1024x768 stereo display.

Update October 2004

Experiments in augmented characters filmed in stereo.

Update October 2005

Interface with Intersense tracker

References

Shmuel Peleg,
Omnistereo: Panoramic Stereo Imaging
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 3, March 2001.

S. Tzavidas and A.K. Katsaggelos,
Multicamera Setup for Generating Stereo Panoramic Video,
Proc. 2002 SPIE Conference on VCIP, San Jose, CA, Jan. 2002.

H.C. Huang and Y.P. Hung.
Panoramic stereo imaging system with automatic disparity warping and seaming.
In Proceedings of International Conference on Image Processing and Character Recognition,
ICS'96, pages 48-55, Taiwan, ROC, December 1996.

J. Gluckman, S. Nayar, and K. Thoresz.
Real-time omnidirectional and panoramic stereo.
In DARPA IUW-98, pages 299-303, Monterey, California, November 1998. Morgan Kaufmann.

S. Peleg and M. Ben-Ezra.
Stereo panorama with a single camera.
In IEEE Conference on Computer Vision and Pattern Recognition, pages 395-401, Ft. Collins, Colorado, June 1999.

Augmented Reality provides awesome experiences and changes the way we see our world. AR applications are being used in several areas such as education, healthcare, shopping, and gaming. Here, we will discuss the top 15 augmented reality development SDKs (Software Development Kits), which are used to develop AR applications.

1. Vuforia

Vuforia is the most popular Augmented Reality SDK, which provides great features to develop Augmented Reality applications for phones and tablets. Vuforia provides the benefits of adding Computer Vision to Android, iOS and UWP applications, and allows us to create efficient AR experiences that realistically interact with objects in the real-world environment. It is natively integrated with the Unity Game Engine and can be downloaded and installed via the Unity Installer. Its features include:

  • Model Targets allow physical objects to be recognized and tracked using a 3D model of the object
  • Ground Plane is used for placing objects on horizontal surfaces in our environment
  • Image Targets (flat images) are used for print media and product packaging
  • VuMarks are used for augmenting and identifying objects as part of a series, such as consumer products or toys
  • Multi-Targets are used for collections of Image Targets in a defined arrangement
  • Cylinder Targets enable us to use bottles and cans, or any cylindrical image, in AR apps
  • User Defined Targets provide the flexibility to use camera images, captured by us, as Image Targets

Type: Free + Commercial SDK

Current SDK Version: 7

Supported Platforms: Android, iOS, Windows Phone, Unity Editor

Pricing: Free (Limited Version With Watermark), Paid (Version & Price depend on License Category)

Download Vuforia SDK Here.

2. Wikitude

Wikitude is the oldest augmented reality SDK tool, which provides the best experiences for geolocation technologies, video overlay, image recognition, image tracking, location-based AR, and 3D model rendering. Wikitude SDK 6 introduced SLAM (Simultaneous Localization And Mapping), a technology which enables object tracking and recognition, and markerless instant tracking. The Wikitude SDK includes several more features:

  • Offline Object Recognition and Object Tracking
  • Offline 2D Image Recognition and 2D Image tracking
  • Cloud 2D image recognition and tracking (in addition, extended tracking in the latest SDK)
  • Distance to Physical Target
  • Geo-Location Scenes
  • Geo-Fence Triggers
  • Radar UI Element
  • Relative Locations
  • Distance-Based Scaling
  • Augmentations and Visualizations (including Full customization of AR view, Integrated Rendering Engine, Texts, Images, Animated Images (or Sprites), Videos (including Transparent Videos), Sound, HTML Widgets, Static and Animated 3D Models)

Type: Free + Commercial SDK

Current SDK Version: 7

Supported Platforms: Android, iOS, Windows Phone, Unity Editor, Smart Glasses

Pricing: Free (Limited Version With Watermark), Paid (Version & Price depend on License Category)

Download Wikitude SDK Here.

3. ARToolKit

ARToolKit is an open-source library used to create augmented reality applications that detect 2D images and overlay virtual 3D objects on the real-world environment. The ARToolKit SDK is maintained as an open-source project hosted on GitHub. It is a multi-platform SDK; it runs on Windows, Mac OS, Linux, Android, and iOS. Its features include:

  • Supports square marker, multimarker, and 2D barcode
  • Robust and Efficient Tracking (including Natural Feature Tracking Functionality)
  • Simultaneous Tracking
  • Provides Solid Camera Calibration Support
  • Stereo Camera Support
  • Multiple languages supported
  • Provides Unity3D and OpenSceneGraph Support
  • Optimized for Mobile Devices (Android, iOS)
  • ARToolKit uses Computer Vision methods and techniques to calculate the device’s camera position and orientation relative to flat textured surfaces or square shapes, which allows programmers to overlay virtual objects
  • Provides fast and precise tracking

Type: Open Source

Current SDK Version: 5

Supported Platforms: Android, iOS, Windows Phone, Windows (PC), Mac OS, Linux, Unity Editor

Pricing: Free + Open Source

Download ARToolKit SDK Here.

4. ARKit

The iOS 11 SDK introduced a new framework called ARKit, which enables developers to build great augmented reality experiences for Apple’s iPhone and iPad devices. Some of its features are:

  • It combines advanced scene processing, device motion tracking, and camera scene capture, and display conveniences which simplify the development of AR applications
  • It provides fast and stable motion tracking by using Visual-Inertial Odometry (VIO), which blends the device’s Core Motion data and camera sensor data to get a better understanding of how the device is moving around the real space
  • Provides better Light Estimation, because it uses the camera sensor to better estimate the amount of light present in the current scene, and applies that estimated amount of lighting on virtual objects
  • Supports SceneKit and other third-party tools like Unity and Unreal Engine
  • Provides high performance: because it runs on the A9 and later processors (which are known for excellent performance), it allows developers to create detailed, graphics-rich content

Type: Free

Current API Version: 1.5

Supported Platforms: iOS

Pricing: Free

Download ARKit SDK Here.

5. Kudan

Kudan is an Augmented Reality SDK used to create AR apps for mobile devices running iOS and Android. Written in C++ and assembly, it provides the benefits of fast execution, robust performance, and a minimal memory footprint, which make Kudan the main rival of the Vuforia SDK. It uses SLAM technology to recognize 3D objects and 2D images. The Kudan SDK includes the following features:

  • Provides native platform APIs, such as Objective-C for iOS, and Java for Android
  • Supports the Unity Game Engine and enables developers to create Cross-Platform AR apps
  • Used for advanced IoT (Internet of Things) and AI (Artificial Intelligence) applications
  • Makes use of Instantaneous SLAM (Simultaneous Localization And Mapping) with high-quality models
  • Can be used for both marker and marker-less operations
  • Flexible to work on mobiles, Head-Mounted Displays, and Robotics applications

Type: Free + Commercial SDK

Current SDK Version: 1.5

Platform: Android, iOS, Windows Phone, Unity3D Cross-Platform Development

Pricing: Free (Unlimited Version With Watermark), Paid (Version & Price depend on License Category)

Download Kudan SDK Here.

6. EasyAR

EasyAR comes in two editions: EasyAR SDK Basic and Pro. EasyAR SDK Basic supports planar targets, smooth loading and identification of 1000+ targets, video playback based on hardware codecs, streaming and transparent videos, QR code recognition, and simultaneous multi-target tracking.

EasyAR Pro was introduced in SDK version 2.0 and it contains all EasyAR SDK Basic features and adds more features like 3D object tracking, SLAM, and screen recording.

Type: Free

Current SDK Version: 2

Platform: Android, iOS, Windows (PC), Mac, Linux, Windows Mobile, Unity Editor

Pricing: Free (Limited Version With No Watermark), Paid (Unlimited Version, $499/License Key)

Download EasyAR SDK Here.

7. MaxST

The MaxST SDK is an AR engine used to develop augmented reality applications. It comes as an all-in-one SDK package and includes five main features:

  • Image Tracker

It tracks and recognizes planar target images, transparent videos, 3D models, and 3D animations

  • Instant Tracker

It finds the planar or flat surface through a camera frame and scans the surroundings so that we can accurately place 3D objects onto the surface with correct positioning and alignment.

  • Visual SLAM

It creates and saves 3-dimensional maps of target spaces.

  • Object Tracker

It loads map files (which were created and stored with Visual SLAM) and overlays Augmented Reality experiences on them.

  • QR/Barcode Scanner

It recognizes barcodes and also QR codes.

Type: Free + Commercial SDK

Current SDK Version: 3

Platform: Android, iOS, Unity Editor, Windows (PC), Mac OS, Smart Glasses

Pricing: Free (Unlimited Version With Watermark), Paid (Version & Price depend on License Category)

Download MaxSt SDK Here.

8. Xzimg

Xzimg provides three SDK products to create AR applications for mobile devices:

  • Xzimg Augmented Face

It consists of high-quality face tracking features. Xzimg Augmented Face is an efficient tool to create AR face-tracking experiences.

  • Xzimg Augmented Vision

It consists of high-quality marker and image tracking functionalities which enable the development of augmented-vision-based apps, such as car ecosystems and industrial prototypes.

  • Xzimg Magic Face

It consists of high-quality deformable face tracking features which provide robust and real-time AR experiences. Xzimg Magic Face is an efficient tool to create make-up and face replacement based applications.

Applications developed with these three SDKs can be deployed on Windows (PC), mobiles (Android & iOS), and HTML5-compliant browsers through the Unity plugin system. Finally, the trial versions of these SDKs are free and can only be used for demonstrations.

Type: Free + Commercial SDK

Platform: Android, iOS, Unity Editor

Pricing: Free (No Application License, Watermark Added), Paid (Unlimited Application Licenses, Price: €1600 single-user license)

Download XZimg SDK Here.

9. NyARToolKit

NyARToolKit is an augmented reality library based on ARToolKit. It is a smaller, simplified version of ARToolKit, currently used for object or image recognition and natural feature tracking. The library is easy to integrate, but an English version is not currently available.

Type: Open Source

Current Version: 5

Platform: Windows (PC), Unity Editor

Pricing: Free

Download Source Code Here.

10. ARCore

Google has released its own augmented reality Software Development Kit (SDK) for Android developers. It is built on Tango technology but works across Android devices without requiring any additional hardware. ARCore provides three key features to connect virtual objects with the real world:

  • Motion Tracking allows the device (tablet or phone) to understand and track its current position relative to the real world environment
  • Environmental Understanding enables the device to detect the location and size of horizontal and flat surfaces (ground or table)
  • Light Estimation allows the phone to analyze and estimate the current lighting conditions of surroundings.

The ARCore API supports Android 7.0 and above. We can download and install it from the Android SDK Manager.

Type: Free

Current Version: 1

Platform: Android, Android NDK, Unity, Unreal, Web, Java/OpenGL

Pricing: Free

Download ARCore Here.

11. AR-Media

ARMedia SDK consists of tracking and rendering modules that can be used to implement different tracking and recognition methods, including 3D object recognition, 2D object recognition, Planar Image, Geo-Location and Motion Tracking. ARMedia SDK also identifies complex 3D objects regardless of their geometry and size.

The SDK also provides a Unity Plugin, which allows us to integrate all tracking features in Unity 3D applications.

Type: Free + Commercial SDK

Current SDK Version: 2

Platform: Android, iOS, Windows Phone, Web, Windows (PC), Mac OS, Linux, Unity Editor

Pricing: Free (Limited), Paid (Price depend on the License Category)

Download ARMedia SDK Here.

12. Metaio

Metaio, a company founded by Thomas Alt and Peter Meier and since acquired by Apple, also provides an augmented reality SDK. The free version (with a watermark) is supported on Windows, iOS, and Android with an additional Unity3D plugin. The Metaio SDK includes its own scripting language called AREL (Augmented Reality Experience Language). AREL enables you to develop Augmented Reality apps using web technologies like XML, HTML5, and JavaScript. Its features include:

  • 2D Image Recognition
  • 3D Image Recognition
  • Face Tracking
  • Location Tracking
  • SLAM (Simultaneous Localization And Mapping)
  • Barcode Scanning
  • QR Code Scanning
  • Continuous Offline and Online Visual Search
  • Gesture Detection

Type: Free + Commercial SDK

Current SDK Version: 6

Platforms: Android, iOS, Web, Windows (PC), Mac OS, Unity Editor, Google Glass, Epson Moverio BT-200 and Vuzix M-100

Pricing: Free (Watermark), Paid (Unlimited with No Watermark)

Download Metaio SDK Here.

13. Aurasma

Aurasma is an augmented reality development platform available as an SDK and as a free mobile app for iOS and Android. Using the device’s camera, compass, accelerometer, GPS, and internet connection, Aurasma technology combines image recognition and a conceptual understanding of the 3D world to identify and recognize objects and images and merge augmented reality experiences into the scene.

Type: Free + Commercial SDK

Current SDK Version: 3

Platform: Android, iOS, Unity Editor

Download Aurasma SDK Here

14. CraftAR

The Catchoom CraftAR SDK allows us to create our own augmented reality experiences by linking real-world objects like products or magazines to videos, websites and 3D models. Its features include:

  • Image Recognition
  • Opens the AR experience related to the matched item
  • Switch between Single Shot Mode (one photo) and Finder Mode (continuous scanning)
  • Works in both offline and online modes
  • Can run offline with no cloud support necessary, and the cloud can be accessed whenever needed

Type: Free + Commercial SDK

Current SDK Version: 2

Platform: Android, iOS, Web, Windows (PC), Mac OS, Linux, Unity Editor

Pricing: Free (Limited Version With 20 Images and 1000 Cloud Visual Scans), Paid (Version limitation & Price depend on the License Category)

Download CraftAR Here.

15. ARLab

The ARLab SDK is also used to create augmented reality applications for Android and iOS devices. It focuses on two major features: Image Matching and Image Tracking. More features include:

  • Real-Time Image Recognition
  • Match multiple images at a time
  • Works Offline (no internet connection required)
  • QR codes detection
  • Real-Time Image Tracking
  • Extreme or very high angles (90-degree rotations)

Type: Free + Commercial SDK

Current SDK Version: 1

Platform: Android, iOS

Pricing: Paid (€299/per app)

Download ARLab Framework Here.

We hope this list helps you make your choice. These are not the only SDKs out there, but they are surely the most popular, and usually that popularity is justified. If you have experience with any of these, feel free to share your impressions.

Vladimir Ilic is author at LeraBlog. The author's views are entirely their own and may not reflect the views and opinions of LeraBlog staff.