The Eye Tribe [Copenhagen]: Lead Android Developer - Job Board

About The Eye Tribe
The Eye Tribe is an exciting startup based in Copenhagen developing eye tracking technology that can be integrated into a wide range of products such as mobile phones, tablets, computers, monitors and cars. The Eye Tribe was named a Gartner Cool Vendor in 2012 and has won numerous awards since the company was founded in 2011.
About the job
The Lead Android Developer will be responsible for bringing our technology to the Android platform, working together with our eye tracking algorithm experts to build the next generation of human-computer interaction technology.
We seek a professional and dedicated engineer with a demonstrated ability to build elegant and efficient solutions. You will be part of an international team in a rapidly growing company set for global expansion. This is a unique opportunity to join early and grow with the company.
Required Expertise/Experience
Professional software development experience (5+ years)
Relevant Android and Linux software development experience
System architecture and Object Oriented design
Excellent verbal and written communication skills
Technical skills – Required
Experience building Android applications and/or subsystems
Strong experience in C/C++/Java programming
Development of kernel modules and/or system libraries
Android ROM development
Technical skills – Desired
Development of Linux device drivers
Hardware acceleration (e.g., Renderscript, OpenCL, OpenGL ES, Tegra, ARM)
Hardware integration (CMOS camera sensors, MIPI, I2C)
Image processing/analysis (preferably using OpenCV)
Education
Bachelor's degree or higher in Computer Science or a similar field.
Apply
Submit a resume and a cover letter to jobs(at)theeyetribe(dot)com and we will contact you ASAP.

Related

[Q] Technical details about WP7 internal design?

Hello,
Is there a good source for technical information about WP7 internals/design, for example how the UI was engineered, or how they do multitasking on the Snapdragon CPU?
What I mean is, I'm looking for the kind of information that used to be dished out by MS during Vista development on sites like channel9.msdn.com, where they told us how they implemented things like a brand new networking stack, the Aero Glass UI, finer-grained kernel locking and whatnot.
I'd like to know the equivalent for WP7: how they achieve the smooth UI with 2D/3D animations, techniques like state machines, separate UI threads, what developer tools they use to build the OS, and so on...
It was nice during the Vista days with Jim Allchin, but sadly since Win7 and Steven Sinofsky (hope I got the names right) they seem to have clammed up.
Thanks,
Vishal

[INFORMATIONAL] Broadcom Releases VideoCore IV Source

SOURCE: http://www.xda-developers.com/android/broadcom-releases-videocore-source-ported-to-bcm21553-socs/
LINK: http://blog.broadcom.com/chip-desig...ves-developers-keys-to-the-videocore-kingdom/
The community of open source mobile developers around the world is a vocal bunch – and here at Broadcom we’ve heard their call.
To date, there’s been a dearth of documentation and vendor-developed open source drivers for the graphics subsystems of mobile systems-on-a-chip (SoC). Binary drivers prevent users from fixing bugs or otherwise improving the graphics stack, and complicate the task of porting new operating systems to a device without vendor assistance.
But that’s changing, and Broadcom is taking up the cause.
Today, Broadcom is releasing the full source of the OpenGL ES 1.1 and 2.0 driver stack for the Broadcom VideoCore® IV 3D graphics subsystem used in the BCM21553 3G integrated baseband SoC. VideoCore IV is used in many Broadcom products, including the BCM2835 application processor, which runs the popular Raspberry Pi microcomputer.
The trend over the last decade has been toward greater openness in desktop graphics, and the same is happening in the mobile space. Broadcom — a long-time leader in graphics processors — is a frontrunner in this movement and aims to contribute to its momentum.
The VideoCore driver stack, which includes a complete standards-compliant compiler for the OpenGL® ES Shading Language, is provided under a 3-clause BSD license; the source release is accompanied by complete register-level documentation for the graphics engine.
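For context: the released stack sits beneath the standard OpenGL ES 2.0 entry points that applications already call, including the shader compiler. Below is a minimal sketch in Android Java (assuming a current EGL context; the trivial vertex shader is just a placeholder) of the compile path whose implementation is now open on these SoCs:
Code:
import android.opengl.GLES20;

public class ShaderCheck {

    // Compile a trivial GLSL ES shader through the standard GLES 2.0 entry
    // points. On a device running the released stack, the compiler working
    // under glCompileShader is the now open-sourced VideoCore one.
    public static int compileVertexShader() {
        String source =
                "attribute vec4 aPosition;\n" +
                "void main() { gl_Position = aPosition; }\n";

        int shader = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
        GLES20.glShaderSource(shader, source);
        GLES20.glCompileShader(shader);

        int[] status = new int[1];
        GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
        if (status[0] == 0) {
            String log = GLES20.glGetShaderInfoLog(shader);
            GLES20.glDeleteShader(shader);
            throw new RuntimeException("Shader compile failed: " + log);
        }
        return shader; // caller must have a current EGL context
    }
}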
Hope this helps our beloved developers in the Samsung Galaxy Grand Duos (GT-I9082) forum develop more awesome ROMs!
Good luck, guys :good:

Overlooked Mobile Application Testing Conditions

Testing iOS and Android software presents unique challenges that require unique test conditions.
As easily portable devices, smartphones and tablets are used in a variety of settings, where wireless connectivity may fluctuate widely and acutely affect the performance of any applications in use. Unlike with PCs on wired connections, dev/test teams cannot assume relatively stable network conditions when crafting a mobile game, messaging client or news reader.
Moreover, despite enormous leaps forward in CPU and GPU design since the iPhone’s debut in 2007, mobile hardware is still considerably less capable than desktops or laptops, especially when it comes to device memory. In practice, this can mean suboptimal performance when working with mostly interpreted languages such as JavaScript (PCs, in contrast, often have RAM to spare and can accordingly overcome flaws in language design), as well as frequent crashes.
Still, these difficulties have not discouraged shops from trying their luck. There are more than 1 million apps in both the Apple App Store and Google Play, and Microsoft has revealed that the count for the Windows Phone Store is now at 300,000. With the market moving toward mature software that takes advantage of increasingly powerful endpoints and addresses functionality once reserved for PCs, test management solutions will be instrumental for fostering collaboration and coordinating both manual and automated tests.
How can developers and QA engineers deal with so many mobile devices and platforms?
Creating software for mobile devices has never been simple or easy. In the early days, there were severe constraints on hardware as well as relatively few APIs and toolkits for expediting development. Over the years, some of these challenges have given way to new ones surrounding sustainable monetization and consistency across a wide range of platforms.
Teams have to satisfy users who may delete an app if it crashes even once, but how can they do so when there are so many device/OS combinations to account for? Native vs. HTML5 development is a conversation for another time – let’s look at how an application developed with either methodology might be tested. Many of the issues that apps face in the wild originate from overlooked mobile testing conditions which, had they been accounted for, might have produced a more polished product. Here are a few to keep in mind:
Too much manual testing: Manual tests aren’t bad – they’re critical to many QA workflows. But teams can easily become over-reliant on them, which doesn’t scale well given how fragmented the mobile ecosystem is. Android KitKat, for example, runs on only about 20 percent of Android devices as of August 2014. Automated processes are needed.
Insufficient simulation of real-world conditions: As discussed earlier, smartphones and tablets don’t exist in a vacuum. They’re carried inside buildings with poor reception, or packed along into remote regions with only 2G or 3G coverage. Tests have to account for these realities as well as limitations on memory, screen resolution and battery life (a sketch of this follows the list).
Low attention to region/language settings: This one flies under the radar since many developers target only specific sets of users. For apps with an international or multilingual audience, it is important to check whether the platform in question has a translation option and whether app performance is affected by switching from one setting to another.
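As one way to exercise the real-world-conditions point above in an automated test, here is a minimal sketch that throttles a server response to 2G-like speeds using OkHttp's MockWebServer. The /feed endpoint, the JSON body and the throttle numbers are illustrative placeholders, not a prescribed setup:
Code:
import java.util.concurrent.TimeUnit;

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.mockwebserver.MockResponse;
import okhttp3.mockwebserver.MockWebServer;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class SlowNetworkTest {

    // Simulate a weak 2G-style connection by throttling the response body
    // to 1 KB per second, then verify the client still completes the call.
    @Test
    public void feedSurvivesSlowConnection() throws Exception {
        MockWebServer server = new MockWebServer();
        server.enqueue(new MockResponse()
                .setBody("{\"articles\":[]}")
                .throttleBody(1024, 1, TimeUnit.SECONDS)); // 1 KB/s
        server.start();

        OkHttpClient client = new OkHttpClient.Builder()
                .readTimeout(30, TimeUnit.SECONDS) // generous timeout for slow links
                .build();
        Request request = new Request.Builder()
                .url(server.url("/feed"))
                .build();

        try (Response response = client.newCall(request).execute()) {
            assertEquals(200, response.code());
        }
        server.shutdown();
    }
}
The same pattern extends to dropped connections (MockResponse's socket policies) so slow-network behavior becomes a regression test rather than a field surprise.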
Overall, mobile testing is about scalability for many devices and consistency despite constraints. A blend of automated and manual tests is usually the best way forward.
“Manual testing is a definite need, however, there are so many devices and combinations in the market today that it is necessary to use automated mobile tools as well,” stated a software engineer from CallFlow, in a post on LinkedIn. “User expects (sic) the application to stay on, connected, and perform at all times. To meet these expectations, the mobile testing strategy should include real device testing under various real world conditions. That includes various signal strengths, networks, speed and more.”
The stakes for mobile testing: Even big companies can miss bugs as apps scale
Facebook, with its 1 billion users, is obviously an outlier in the software world, but its recent battle with a bug in its iOS app illustrates how mobile testing requires tremendous time and effort as well as top-notch tools. The social network’s engineers were noticing an issue related to Apple’s Core Data framework, but due to the size and rapid evolution of the Facebook codebase, parsing the crash reports proved a monumental undertaking.
“[C]ertain fundamental programming challenges inevitably become more difficult with scale,” explained Slobodan Predolac and Nicolas Spiegelberg, engineers at Facebook. “Debugging, for example, can prove difficult even if you can reliably reproduce the problem – and this difficulty increases when debugging a highly visible but nondeterministic issue in a rapidly changing codebase.”
Ultimately, the Facebook team was able to identify the issue through close collaboration and a focus on programming fundamentals. The fix may have reduced the app’s iOS crash rate by 50 percent.
Users often have little patience for app crashes, so this is an important development. While most other shops won’t operate at Facebook’s scale, they’ll still have to deal with similar performance issues that could manifest due to adverse real-world conditions and/or other flaws in the code. A test management solution enables developers and software testers to scale their workflows and find defects early and often at low cost.

Advice, developing a high end video compression codec on Shield Android TV for Camera

Advice, developing a high end video compression codec on Shield Android TV for Camera Acquisition and HQ video.
Hi
Aims
I am researching building a high-end, streamlined video compression/decompression codec that can be installed and registered under Android and made available to third-party camera, editing and video apps. The Shield seems like a good top-end development target. I am hoping it will be able to compress 4K+ video streams with small file sizes and reduced processing overhead.
Even though it is meant mainly for high-end camera acquisition on Android, it also has other uses on the web.
I am trying to find out general, and detailed, information to see what I need to address. I'm a newbie to all this, going back to the days when C++ was new and not yet taught at my college. I'm going to have to reteach myself programming, but I have a lot of knowledge on the design side from previous work.
Codec Programming?
So basically, I need broad advice on programming and registering a codec under Android, and on GPGPU use.
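From what I've read so far, apps can't register a new system-wide codec themselves – that needs a platform-level plugin (OMX, or Codec2 on newer Android) plus an entry in media_codecs.xml, i.e. a custom ROM or vendor image. What an app can do is enumerate what's already registered; a minimal sketch of that (class and tag names are just placeholders):
Code:
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

public class CodecInventory {

    // List every codec the device has registered and flag any that claims
    // the MIME type we care about. A custom codec would have to appear in
    // this list before third-party apps could reach it through MediaCodec.
    public static void dumpCodecs(String mimeType) {
        MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase(mimeType)) {
                    Log.i("CodecInventory", (info.isEncoder() ? "encoder " : "decoder ")
                            + info.getName() + " handles " + type);
                }
            }
        }
    }
}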
But with Android, things seem a bit more complicated performance-wise because of the way things are structured:
Backend Camera Streamlining?
Previous high-level camera projects have failed due to the underlying restrictions of the Android camera interface and its customisations from phone to phone, but also Android's slow nature. This is an attempt to bypass that with a high-performance codec section. Android L and M reportedly address the deficiencies somewhat, but I realise the data rate of incoming video frames might still be poor, and I might have to write a back end to acquire frames from the hardware fast enough for the codec. I don't want to do that, but if I can't get frame data delivered fast enough I will have to look at it. I want to use mainly the GPU or other processing units instead of the main processor, for power efficiency and speed, but I realise nothing is simple; this is the sort of work you have to do because it was not done right in the first place. So the aim is to avoid going through the slower high-level camera interfaces as much as possible (one zero-copy route is sketched below). I understand it is all based on a standard Linux camera API. If the camera software does not have to be rewritten and it can deliver frame data to a codec at streamlined, timely speeds, then I can avoid much of this. So I probably need advice on these things too.
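As I understand it, the usual zero-copy route on Android 5.0+ is to give the camera an input Surface owned by a hardware encoder, so frames never pass through app memory. A rough sketch of that setup – the H.264 codec choice and bitrate are placeholders; my own codec would need its own path:
Code:
import java.io.IOException;

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

public class SurfaceEncoder {
    public final MediaCodec codec;
    public final Surface cameraTarget; // add as an output target of the Camera2 session

    // Configure a hardware encoder that takes its input from a Surface.
    // Camera2 renders captured frames straight into this Surface, so the
    // pixels stay in the GPU/ISP domain instead of being copied through
    // app memory -- the usual way to keep 4K capture fast.
    public SurfaceEncoder(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 20_000_000); // ~20 Mbit/s, illustrative
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        cameraTarget = codec.createInputSurface(); // valid only between configure() and start()
        codec.start();
        // The caller drains encoded frames via codec.dequeueOutputBuffer(...).
    }
}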
Backend Storage Streamlining?
Now, on the other side we have storage. Hopefully the data rate can be kept small enough to avoid issues, but that is unlikely with 4K-8K frames, so I would need advice on this too.
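For the storage side, as I understand it the compressed output is drained from the encoder and handed to a muxer, so the disk only ever sees the codec's output bitrate rather than the raw pixel rate. A rough sketch, assuming the track was already added via muxer.addTrack(encoder.getOutputFormat()) after the encoder signalled INFO_OUTPUT_FORMAT_CHANGED, and muxer.start() was called:
Code:
import java.nio.ByteBuffer;

import android.media.MediaCodec;
import android.media.MediaMuxer;

public class EncodedWriter {

    // Drain one compressed frame from the encoder and append it to an MP4.
    public static void drainOnce(MediaCodec encoder, MediaMuxer muxer, int trackIndex) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int index = encoder.dequeueOutputBuffer(info, 10_000 /* microseconds */);
        if (index >= 0) { // negative values are status codes, not buffers
            ByteBuffer encoded = encoder.getOutputBuffer(index);
            muxer.writeSampleData(trackIndex, encoded, info);
            encoder.releaseOutputBuffer(index, false);
        }
    }
}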
JavaScript to Android, Android to JavaScript transportability?
I actually want to develop the core of it primarily in JavaScript, for portable use on the web and Firefox OS, so I will have to find the best way to bring it over to Android for compilation. As I know next to nothing about these newer languages, it will be an uphill learning curve. As I understand it, JavaScript's syntax is separate from Java's, and not even a logical subset, which makes life hard.
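For prototyping, as I understand it a WebView can at least run a JavaScript core directly on Android – far too slow for real-time video, but enough to validate the algorithm before porting. A sketch, where the asset page and encodeFrame() entry point are made-up placeholders for whatever the core would expose; evaluateJavascript() needs Android 4.4+ and the UI thread:
Code:
import android.util.Log;
import android.webkit.WebView;

public class JsCodecBridge {

    // Call an entry point in a JavaScript codec core hosted in a WebView.
    // In real code, wait for WebViewClient.onPageFinished before evaluating.
    public static void callJsCore(WebView webView) {
        webView.getSettings().setJavaScriptEnabled(true);
        webView.loadUrl("file:///android_asset/codec.html"); // hypothetical host page
        webView.evaluateJavascript(
                "encodeFrame(42)", // hypothetical JS entry point
                result -> Log.i("JsCodecBridge", "JS returned: " + result));
    }
}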
----------
Anyway, it is a shame we don't have a Kickstarter-like funding scheme to pay a good programmer to do most of the background stuff and upgrade the Linux code and drivers, so anybody could use the new code with any codec and camera app combination. My main interest is my own codec, not all the other stuff; that is really about fixing the Android and Linux camera code, which would help everybody.
This is not an official project start, just implementation research.
If anybody knows of anybody who can contribute, please direct them here.
Thanks.
Stevio2 said:
Advice, developing a high end video compression codec on Shield Android TV for Camera Acquisition and HQ video.
I wish you all the luck in your endeavour, as this sounds really interesting, and different.
Saying that, I don't think you're supposed to post anything in the dev forums that isn't actual work. I'm just giving you a heads up in case a moderator comes along... also, I could be wrong, if this has changed recently.
Your best bet, I reckon, is to post in this forum:
http://forum.xda-developers.com/general/general
It's the main General forum of the entire XDA, so you'll have more eyeballs... and maybe a better chance of getting a "start in the right direction" from someone knowledgeable.
I've also seen many Android technical questions asked on the Stack Exchange sites by devs working on their projects, so that might be another avenue to explore if you're unlucky here.
Anyways, wish you luck with this
Development Forums (ones with the word development in the title) - For Developers to post release threads e.g. ROMs and Kernels including modifications to kernels, bootloaders, ROMs, etc., as well as R&D development discussion threads designed with an end goal
Thanks. That's from the forum discussion rules; I mistook it to mean development research discussion as well. If it should actually be moved, then I'm happy for it to go to General.
Stevio2 said:
Thanks. That's from the forum discussion rules; I mistook it to mean development research discussion as well. If it should actually be moved, then I'm happy for it to go to General.
The Shield is based on the Nvidia Tegra X1 chip. Nvidia also just released the Jetson TX1 development board, which is similar. If you register as a developer with Nvidia (which is easy) you get access to all the dev docs (including video codec docs) for the TX1, which boots Ubuntu. That should be a good start.
Sounds good. It was an Android development related question though (using Shield hardware under Android so the codec can be shared with different platforms; you can just do more on the Shield hardware). Maybe there is a Linux overlap with Android in codec support, but I doubt that is the full story. I am interested in dealing with 8K content too. There is a way to do 8K over HDMI 2.0, but it is much too involved at this stage; the display also has to be modified, or an adaptor made, to interface to a future 8K interface.
I have just realised the Shield might be good for touch table work (not so good on the software side, as there is no established software base to work on). I located a good, cheap, fine-grained, more transparent touch surface overlay technology a little while ago that is being used to do cheap touch tables in Asia. Using a 4-subpixel screen I can do a semi-8K display out of a 4K one (though you can't directly access the white pixel through HDMI, which is useless). There are also now 6-colour pixels. A firmware change might allow a display to do subpixel addressing. However, you can get panel frames without the internal section and get direct access to the internal panel interface (which is why HDCP is probably useless). Anyway, 8K would yield 16K, a nice minimum for an 80 inch table, with OLED or a projector. Reprogramming a display to use DisplayPort/Thunderbolt interfaces would be more useful. I tried to negotiate access to a 16K projector chip once to connect up to a low-powered processing array, but got nowhere. Since then I've been dealing with embedded machine-code-level concerns for decades, off and on, and let the newer high-level language and OS stuff (like C# and Linux) go, due to health issues.
Another interesting thing that can be done with a Shield is that it can be hooked up to a camera head and rigged up to be a camera (or the next version could). The problem is that USB 3 is useless compared to Thunderbolt 3 etc. (though camera head to computer interface standards take a while to catch up). My codec could be used for recording. We used to do this with PCs, but the Shield offers much better power consumption. There is Linux software around to do this, but the development board is half-powered and expensive.
Bump
Well, when I said bump, I didn't really mean to move it to a third subforum
Seriously, I want to do a less than 20 Mbit/s 8K visually lossless codec. But at the moment I'm waiting to get checked out for dementia, which explains a lot about the last few decades and the decreasing amount I can do (beta amyloid in particular builds up for 10-20 years with low-grade symptoms before it gets serious enough to be picked up on older scanning, and by then it has caused permanent problems; that's apart from other types of dementia). At this stage I can't do much, much of the time.
Anyway, as the thread has skipped to a second forum in two days, any more shortcut advice is welcome.

Hololens2

Are you amazed after seeing its trailer?
Don't get it for sure...
HoloLens 2 is the future of AI/mixed reality
Definitely. As a developer, I feel HoloLens 2 has a lot to offer.
Pairing HoloLens 2 with Unity's real-time 3D development platform enables businesses to accelerate innovation, create immersive experiences, and engage with industrial customers in more interactive ways.
