This year I have two ideas for GSoC:
1) Image registration and blending for the scikit-image library.
This is a complex project that involves adding a few cool vision algorithms to the library and then combining them into a photomontage framework.
You can find more information in the proposal:
http://www.google-melange.com/gsoc/proposal/review/google/gsoc2013/fm/10001
2) High Dynamic Range imaging for OpenCV
It would also be great to continue working with HDR imaging and have it in OpenCV.
The implemented functionality can then be easily used in nomacs.
Proposal:
http://www.google-melange.com/gsoc/proposal/review/google/gsoc2013/fm/8001
Fedor Morozov
Monday, May 13, 2013
Tuesday, August 21, 2012
HDR imaging with nomacs
After GSoC 2012, nomacs supports displaying HDR images and creating them from a set of LDR images taken at different exposures.
You can find HDR version of nomacs here.
HDR support will be added as a plugin to the first stable version of nomacs.
Monday, August 20, 2012
GSoC 2012 impressions
GSoC was a great experience, and I certainly want to participate again next year (I have a few ideas about that).
I found out that open-source programming is a great thing, and that it's not so difficult to start working on a project you like.
I've learned lots of new things about image processing (especially HDR imaging) and new tools like CMake and OpenCV.
I also noticed that my coding style has changed: my recent code is clearer, has far fewer obscure pointer manipulations, and I'm now used to the K&R indent style.
I want to thank my mentors, Markus and Stefan. They did a great job advising me and testing my code.
By the way, having two mentors is very cool, because you get to know both of them, and two people have more knowledge and respond faster than one.
I'm going to continue working on nomacs after the program ends: first of all, helping to prepare it for the first stable release, separating HDR support into a few dynamic modules to make nomacs easier to distribute, and making some improvements to the HDR processing.
Here's my official final evaluation that will be submitted to Google:
pfstmo
Pfstmo is a great library of tone-mapping operators, maintained by Grzegorz Krawczyk and distributed under the GNU GPL license.
I used four TMOs from it: photographic tone mapping (Reinhard et al.), gradient domain compression (Fattal et al.), and two operators by Rafal Mantiuk, who is also one of the pfstmo developers.
Unfortunately, pfstmo is not supported on Windows, so I had to create the CMake file myself. You can find pfstmo for Windows in my SVN branch.
While trying to compile it from scratch you may encounter the following errors:
- Variable-length arrays exist in C99 but are not supported by MSVC. I replaced those with new/delete (a small sketch of this replacement follows after this list).
- In the Debug configuration you may encounter this error:
Debug Assertion Failed! File: dbgdel.cpp, Line: 52
Expression: _BLOCK_TYPE_IS_VALID(pHead->nBlockUse)
I have no idea how this happens, but it is solved by deleting these two lines in tmo_mantiuk08.cpp:
958: delete csf_lut;
959: delete skip_lut;
- The Mantiuk '08 operator also requires libgsl. You can find it here.
- Fattal's operator can use FFTW for the fast Fourier transform; FFTW is also required for Durand's bilateral filter operator. I did not use it.
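As promised above, here is a minimal sketch of the variable-length-array replacement. The function name, the width parameter, and the loop body are placeholders for illustration, not actual pfstmo code:

#include <cstddef>

void processRow(std::size_t width)
{
    // C99 variable-length array (accepted by gcc, rejected by MSVC):
    // float row[width];

    // Portable replacement using heap allocation:
    float* row = new float[width];
    for (std::size_t i = 0; i < width; ++i)
        row[i] = 0.0f;  // ... real per-element processing goes here ...
    delete[] row;       // delete[] to match new[]
}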
HDR formats
This is a short post about reading and writing the most popular HDR formats.
The three most common HDR image formats are Radiance HDR (.hdr, .pic), OpenEXR (.exr), and TIFF.
A Radiance HDR file consists of a plain-text header, which holds information such as the image resolution, and pixel data in RGBE (red, green, blue, and exponent) format, which may be run-length encoded for compression.
You can find the original code in the Radiance software or implement it yourself; it's relatively easy.
You can also take a look at my code, or just modify it for use in your application.
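To give an idea of the format, here is a minimal sketch of decoding one RGBE pixel into floating-point RGB. It is my own illustration of the standard conversion, not the nomacs or Radiance code:

#include <cmath>

// Decode one Radiance RGBE pixel (4 bytes: red, green, blue, shared exponent)
// into linear floating-point RGB.
void rgbeToFloat(const unsigned char rgbe[4], float rgb[3])
{
    if (rgbe[3] == 0) {                        // zero exponent means black
        rgb[0] = rgb[1] = rgb[2] = 0.0f;
        return;
    }
    // The exponent is biased by 128; the extra 8 accounts for the
    // 8-bit mantissas stored in the first three bytes.
    float f = std::ldexp(1.0f, rgbe[3] - (128 + 8));
    rgb[0] = rgbe[0] * f;
    rgb[1] = rgbe[1] * f;
    rgb[2] = rgbe[2] * f;
}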
OpenEXR (an extended-range format) stores color values as 16-bit or 32-bit floating-point numbers.
The OpenEXR library provides support for the 16-bit floating-point representation and for reading and writing .exr files.
It is distributed for both Windows and Linux and can be built easily.
Using Imf::RgbaInputFile and Imf::RgbaOutputFile (ImfRgbaFile.h) is the easiest way to work with .exr files; I used scanline-based I/O.
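For reference, here is a minimal sketch of scanline-based reading with Imf::RgbaInputFile, following the pattern from the OpenEXR documentation; the file name and the missing error handling are simplifications:

#include <ImfRgbaFile.h>
#include <ImfArray.h>

// Read a whole .exr image into an RGBA half-float buffer, scanline by scanline.
void readExr()
{
    Imf::RgbaInputFile file("input.exr");
    Imath::Box2i dw = file.dataWindow();
    int width  = dw.max.x - dw.min.x + 1;
    int height = dw.max.y - dw.min.y + 1;

    Imf::Array2D<Imf::Rgba> pixels(height, width);
    // Shift the frame buffer so that (dw.min.x, dw.min.y) maps to pixels[0][0].
    file.setFrameBuffer(&pixels[0][0] - dw.min.x - dw.min.y * width, 1, width);
    file.readPixels(dw.min.y, dw.max.y);
    // pixels[y][x].r, .g, .b, .a now hold the channel values.
}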
The TIFF container is also a good way to store your HDR images. It supports 32-bit floating-point pixel data, but this takes 12 bytes per pixel, making the images very heavy. Another way is LogLuv encoding, which uses 24 or 32 bits per pixel and is therefore the most compact of the popular HDR formats.
TIFF support is provided by libtiff. Previous versions of the library include a CMakeLists file, so you can easily build it on Windows.
I used the libtiff version distributed with OpenCV.
Use tiffio.h to work with TIFF images. If you are also using OpenCV, you'll have to #undef COMPRESSION_NONE, or it will cause a compile error in the OpenCV code.
Most information about a TIFF image is stored in so-called tags, managed with TIFFGetField and TIFFSetField.
TIFFTAG_BITSPERSAMPLE can be used to check whether it is a 32-bit image, and TIFFTAG_PHOTOMETRIC whether it uses LogLuv (PHOTOMETRIC_LOGL for gray images or PHOTOMETRIC_LOGLUV).
TIFFReadEncodedStrip and TIFFWriteEncodedStrip are the two functions to use for I/O.
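Here is a rough sketch of that workflow (checking the tags, then reading the image strip by strip); the file name and the buffer handling are simplified placeholders, not the actual nomacs code:

#include <tiffio.h>
#include <vector>

void readHdrTiff()
{
    TIFF* tif = TIFFOpen("input.tif", "r");
    if (!tif)
        return;

    uint16 bitsPerSample = 0, photometric = 0;
    TIFFGetField(tif, TIFFTAG_BITSPERSAMPLE, &bitsPerSample);
    TIFFGetField(tif, TIFFTAG_PHOTOMETRIC, &photometric);

    bool isFloat32 = (bitsPerSample == 32);
    bool isLogLuv  = (photometric == PHOTOMETRIC_LOGLUV ||
                      photometric == PHOTOMETRIC_LOGL);

    if (isFloat32 || isLogLuv) {
        tsize_t stripSize = TIFFStripSize(tif);
        std::vector<char> strip(stripSize);
        for (tstrip_t s = 0; s < TIFFNumberOfStrips(tif); ++s) {
            TIFFReadEncodedStrip(tif, s, &strip[0], stripSize);
            // ... convert the strip into the floating-point image buffer ...
        }
    }
    TIFFClose(tif);
}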
That was some practical information about using OpenEXR and libtiff; the nomacs code for this is here.
To learn how the data is actually stored, you can consult the HDR book. It would also be nice to have a Wikipedia article about that.
Thursday, August 16, 2012
Evaluations and everything
I promised to write something about the mid-term evaluation, but actually I don't have much to say about it.
After my university exams were over I had much more time to work on the project, so the two weeks right before the evaluation were very productive.
I was just about to discuss the upcoming evaluation with my mentors when I got a message from Markus informing me that I was passing it.
As for the final evaluation, we had a Skype discussion this Monday and decided that the project is mostly finished; I just have to fix some small bugs (already fixed) and write documentation.
So the first version of the nomacs HDR documentation will be published here. It can also be considered a short report on my GSoC work.
I'm also going to write a few posts about parts of the code that may be useful for someone googling why something is not working properly, and a post about my impressions of GSoC, including the official final evaluation that will be submitted to Google.
And, in case someone is reading this, I'm sorry for not updating the blog often enough, so here is a fitting quote from George R. R. Martin:
This is for those who complain I never blog about my work. (I do, but not often. I prefer to announce when something is finally done, rather just endless reiterations of "I am working on X, I am working on Z," and I am never going to be one of those "I wrote three pages today" writers. Sorry, that's not how I roll).
Monday, June 25, 2012
Community bonding period and coding start
After the project was accepted, there were a few GSoC-related things I had to do before the coding phase.
That's what the community bonding period is for.
Thanks to the qualification task, everything was already set up to start coding, and I only had to deal with SVN.
I got acquainted with all the nomacs mentors (there are two of them for each project) and with the other GSoC student working on nomacs. We communicate via e-mail and Skype.
I was also asked to set up this blog and to provide Google with some official information.
The first iteration of the coding phase is complete, and now I'm working on the second one. Unfortunately, I have an exam session in June, so I'm not exactly following the timeline, but I should catch up with it by the mid-term evaluation.
I think the next post will be about the mid-term evaluation and my results at that milestone.