I can already stream video from an IP camera app on my Android phone to my laptop, and watch the live feed over HTTP in a web browser or VLC. Now I would like to stream it into my own program. Can anyone suggest how to do this?
Note: each frame of the stream will be used for camera calibration and for drawing a region of interest (ROI).
Sorry for my English as well.
↧
I would like to stream video from an IP camera app using Java code
↧
How can I set VideoCapture attributes with set()?
I am trying to adjust the gain/exposure in a simple camera function:
void imgGet() {
    cv::VideoCapture cap(0);
    double k = 0.99;
    cap.set(CAP_PROP_EXPOSURE, 1);
    Mat frame;
    cap >> frame;
    string fileName = getFName();
    imwrite(fileName, frame);
}
I've tried a variety of things, but the error I get is:
> VIDIOC_S_CTRL: Invalid argument
I don't know what that means. How am I supposed to use VideoCapture::set?
I have seen the [videocapture documentation](http://docs.opencv.org/3.0-beta/modules/videoio/doc/reading_and_writing_video.html#videocapture-set)
but I didn't understand how to apply it.
I am on a Linux PC. Despite the error, an image is still captured, but the image quality is unchanged.
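For what it's worth, `VIDIOC_S_CTRL: Invalid argument` is the V4L2 driver rejecting the control value, typically because it is outside the range the driver advertises for that control. Since `VideoCapture::set` returns a boolean, one way to narrow things down is to probe which values are accepted. A minimal Python sketch with a stand-in setter in place of a real camera (`fake_set` is hypothetical and only accepts exposures in [0, 1]):

```python
def probe_values(setter, prop, candidates):
    """Return the candidate values a set()-style call reports as accepted.

    `setter` mimics cv::VideoCapture::set: it returns True on success and
    False when the driver rejects the value (VIDIOC_S_CTRL: Invalid argument).
    """
    return [v for v in candidates if setter(prop, v)]

# Stand-in driver: pretend only exposures in [0, 1] are valid.
def fake_set(prop, value):
    return prop == "CAP_PROP_EXPOSURE" and 0.0 <= value <= 1.0

accepted = probe_values(fake_set, "CAP_PROP_EXPOSURE", [-4, 0.0, 0.5, 1.0, 100])
print(accepted)  # [0.0, 0.5, 1.0]
```

With a real device, the same loop over `cap.set(cv2.CAP_PROP_EXPOSURE, v)` shows which values the driver tolerates; `v4l2-ctl --list-ctrls` reports the advertised ranges directly.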
↧
VIDEOIO ERROR: V4L
I get the following error when I try to initialise a VideoCapture with a URL:
cv::VideoCapture cap = cv::VideoCapture( "http://www.example.com/vid" );
error:
VIDEOIO ERROR: V4L: device http://www.example.com/vid: Unable to query number of channels
There is nothing wrong with the video file or its access rights.
I recently recompiled ffmpeg with extra libraries, and OpenCV 3.2, on Ubuntu Xenial, with the following configurations:
ffmpeg
./configure --enable-gpl --enable-version3 --enable-nonfree --enable-libmfx --enable-runtime-cpudetect --enable-gray --enable-vaapi --enable-vdpau --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libfdk-aac --enable-libtheora --enable-libvpx --enable-libwebp --enable-x11grab --cpu=native --enable-vaapi --enable-vdpau --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libwavpack --enable-libxvid --enable-libx264 --enable-libx265 --enable-openssl --enable-nvenc --enable-cuda --enable-omx --enable-libv4l2
opencv 3.2
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local -DWITH_LIBV4L=ON -DWITH_CLP=ON -DWITH_OPENCL=ON -DWITH_VA_INTEL=ON -DWITH_VA=ON ..
Could it be that I compiled OpenCV with VA or OpenCL?
EDIT:
I tried recompiling OpenCV without the extra parameters and got the same issue:
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local -DWITH_LIBV4L=ON -DWITH_CLP=ON ..
↧
CV_CAP_PROP_SETTINGS working in OpenCvSharp but not in OpenCV?
Hello,
I've been using OpenCvSharp, and this code works fine:
VideoCapture capture = new VideoCapture(0);
capture.Set(37, 1);
where 37 is CV_CAP_PROP_SETTINGS (not defined in OpenCvSharp), and it correctly opens my webcam's configuration window.
Now, trying the same in a C++ project with OpenCV:
VideoCapture capture(0); // open the default camera
if (!capture.isOpened()) // check if we succeeded
    return -1;
capture.set(CV_CAP_PROP_SETTINGS, 1);
this opens the device, but no configuration window appears.
Any hint?
↧
RGB format supported by OpenCV cv::VideoCapture for GStreamer
Hi All,
I have a sample application that takes video from a custom i.MX6 embedded board whose sensor outputs UYVY. I converted it to RGB565 using the available GStreamer module, but this pipeline gives me the error below:
cv::VideoCapture cap("mfw_v4lsrc device=/dev/video1 ! video/x-raw-yuv, height=480, width=640 ! mfw_ipucsc ! video/x-raw-rgb, height=480, width=640 ! appsink");
> OpenCV Error: Unsupported format or combination of formats (Gstreamer Opencv backend doesn't support this codec actually.)
Then, by adding GStreamer's colorspace converter, I was able to get past the error:
cv::VideoCapture cap("mfw_v4lsrc device=/dev/video1 ! video/x-raw-yuv, height=480, width=640 ! mfw_ipucsc ! video/x-raw-rgb, height=480, width=640 ! colorspace ! appsink");
But this colorspace converter runs in software and loads the system. I would like to know which type of RGB format is supported by OpenCV's GStreamer backend.
↧
Video Playback via Microsoft Media Foundation (msmf)
Hi!
Has anyone successfully played back video using the Media Foundation backend (CV_CAP_MSMF) of VideoCapture on Windows 10? My goal is to use only Media Foundation and remove the ffmpeg dependency for video decoding.
ex:
VideoCapture cap(fileName, CV_CAP_MSMF);
Rebuilding OpenCV 3.0, 3.1, and 3.2 from source (with MSMF enabled), I always get cap.isOpened() returning false.
When digging into the cap_msmf.cpp source, at line 3864, none of my videos match this particular MFVideoFormat_RGB24 check. It makes me wonder if there is a limitation in OpenCV's Media Foundation implementation.
MediaType MT = FormatReader::Read(pType.Get());
// We can capture only RGB video.
if( MT.MF_MT_SUBTYPE == MFVideoFormat_RGB24 )
I have spent a couple of days investigating this issue and hope to get some confirmation or shared experience from this forum. Thanks a lot!
-Jay
↧
cv::VideoCapture leaks memory on ethernet cams?
Hello!
I have been noticing that my software's memory usage grows (slowly) over time.
I used to believe it was some kind of buffer.
But after I let it run overnight and it crashed because it was out of memory,
I had to debug.
I found that if I *disable* **grab()** and replace **retrieve()** with a *still* image, memory usage doesn't grow. A still image runs through my algorithm normally.
My software uses 8 **VideoCapture** instances on *RTSP* Ethernet cameras, each one in its own thread, at around 5 FPS.
- After I re-enabled grab(), but not retrieve(), the software grew **80 MB in 1h23m**.
- After I re-enabled grab() and retrieve(), the software grew **120 MB in 16 min**.
Question is:
- Is there a way **cv::VideoCapture** can leak memory with *RTSP Ethernet cameras*?
- Is there any kind of *buffer* I can configure?
- Is there any kind of "clean buffer" I am not aware of?
More info:
- I sometimes get an error : **" [rtsp @ 0xXXXXXXXXXXXX] Too short data for FU-A H.264 RTP packet"**
- I am using OpenCV 3.2 on Linux 64-bits.
- I have to do 5 grabs for each retrieve, so my image doesn't get delayed.
Any help will be appreciated.
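As an aside, the 5-grabs-per-retrieve pattern from the last bullet can be sketched with a stand-in capture object: `grab()` only advances the stream, and `retrieve()` decodes just the newest frame. The `FakeCapture` class below is a stand-in for illustration, not the OpenCV API:

```python
class FakeCapture:
    """Stand-in for cv::VideoCapture: grab() advances, retrieve() decodes."""
    def __init__(self):
        self.pos = 0

    def grab(self):
        self.pos += 1
        return True

    def retrieve(self):
        return "frame-%d" % self.pos

def latest_frame(cap, skip=5):
    # Advance `skip` frames, then decode only the newest one.
    for _ in range(skip):
        cap.grab()
    return cap.retrieve()

cap = FakeCapture()
print(latest_frame(cap))  # frame-5
```

The point of the pattern is that only one frame per cycle is decoded, which keeps latency down; it does not by itself explain growth during grab-only operation.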
↧
Grabbing the image skips the current (or next) frame.
Hey there, I recently realised that all of my output videos play twice as fast as the original, so I made the program print the current frame number and noticed it was increasing by 2 each time (2, 4, 6, 8...). When I removed `cap >> currentImage;`, it printed 1, 2, 3, 4, ...
I just don't understand why this is happening. Is there a mistake in the way I'm printing the frame? Any help would be appreciated.
int main (int argc, char *argv[]) {
    /// CHANGE INPUT FILE HERE
    Mat currentImage;
    VideoCapture cap("testcut.avi"); // "herman.avi"
    if (!cap.isOpened()) {
        cout << "Failed to open the input video" << endl;
        exit(5);
    }
    for (;;) {
        cap >> currentImage;
        if (!cap.grab()) {
            cout << "\n End of video, looping" << endl;
            cap.set(CV_CAP_PROP_POS_AVI_RATIO, 0);
        }
        waitKey(80);
        cout << "frame number= " << cap.get(CV_CAP_PROP_POS_FRAMES) << endl;
    }
}
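For reference, a quick stand-in in Python makes it easy to see why the frame counter jumps by two: both `cap >> currentImage` (a read) and `cap.grab()` each consume one frame, so the loop above advances the stream twice per iteration. The `FakeCapture` class here is an illustration, not the OpenCV API:

```python
class FakeCapture:
    """Stand-in for cv::VideoCapture; every read or grab consumes a frame."""
    def __init__(self, frames=10):
        self.frames = frames
        self.pos = 0

    def read(self):  # plays the role of `cap >> currentImage`
        self.pos += 1
        return self.pos <= self.frames

    def grab(self):
        self.pos += 1
        return self.pos <= self.frames

cap = FakeCapture()
positions = []
for _ in range(3):
    cap.read()  # advances the stream once
    cap.grab()  # advances it again
    positions.append(cap.pos)
print(positions)  # [2, 4, 6]
```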
↧
python frame grabbing from ip camera and process in a different thread
Hi,
When I grab frames with `VideoCapture`, the stream slows down over time: in the beginning it runs smoothly, but after a minute it gets super slow. I believe this happens because of some sort of buffer where `VideoCapture` stores the images while the processing code does its magic (grabbing speed > processing speed).
I am thinking of having one thread that solely grabs frames from the camera, and another that fetches the current frame, does some processing, and shows the processed image. However, passing this image between threads does not seem easy. Does anyone have an idea to put me on the right track?
PS: I'm using the Python interface.
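One common pattern for the split described above is a grabber thread that keeps only the most recent frame in a size-1 queue, so the processor never works on stale frames. A minimal stdlib sketch, with a simple counter standing in for `cv2.VideoCapture.read`:

```python
import queue
import threading

def grabber(read_frame, q, n_frames):
    """Grab frames and keep only the newest one in the size-1 queue."""
    for _ in range(n_frames):
        frame = read_frame()
        try:
            q.put_nowait(frame)
        except queue.Full:
            q.get_nowait()       # drop the stale frame...
            q.put_nowait(frame)  # ...and store the fresh one

counter = iter(range(100))
read_frame = lambda: next(counter)  # stand-in for cap.read()

q = queue.Queue(maxsize=1)
t = threading.Thread(target=grabber, args=(read_frame, q, 50))
t.start()
t.join()

latest = q.get()
print(latest)  # 49 -- only the newest grabbed frame remains
```

In a real application the processing thread would loop on `q.get()`; because the queue holds at most one frame, the processor always sees the freshest image and the grabber never blocks behind slow processing.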
↧
How can I play video?
Hi all,
I want to play four Full HD videos. I'm using a sample that grabs a frame from each video in every loop iteration and shows all four concatenated in a single imshow.
I have a problem with speed: playback is faster than real time.
How can I play it at the exact frame rate of the file?
I am using C++. If there is a better solution in C#, please let me know; I have found some, but all of them had errors opening the file.
Thank you
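One common approach to the frame-rate question above is to read the file's rate (e.g. `CAP_PROP_FPS` in OpenCV) and then, each loop iteration, wait for whatever is left of the frame period after decoding and drawing. The delay computation can be sketched on its own; the 1 ms floor keeps a `waitKey()`-style call pumping window events:

```python
def frame_delay_ms(fps, processing_ms):
    """Milliseconds to wait after processing so each iteration lasts one
    frame period (1000/fps ms); at least 1 ms so a waitKey()-style call
    still pumps GUI events."""
    period_ms = 1000.0 / fps
    return max(1, int(round(period_ms - processing_ms)))

print(frame_delay_ms(25.0, 0))   # 40 -- full 40 ms frame period
print(frame_delay_ms(25.0, 15))  # 25 -- time left after 15 ms of work
print(frame_delay_ms(25.0, 60))  # 1  -- processing already slower than real time
```

Measuring `processing_ms` per iteration (a steady clock before and after the grab/concat/imshow block) and passing the remainder to the wait call keeps playback at real time regardless of how long the four decodes take, up to the point where processing alone exceeds the frame period.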
↧
VideoCapture from HTC Vive camera?
Hi,
I'm trying to get frames from the HTC Vive front-facing camera, but I'm only getting seemingly gray frames out (though they are theoretically not empty).
My code looks like this:
int main() {
    VideoCapture videoSource(0);
    Size size{
        (int)videoSource.get(CV_CAP_PROP_FRAME_WIDTH),
        (int)videoSource.get(CV_CAP_PROP_FRAME_HEIGHT)
    };
    namedWindow("input", WINDOW_NORMAL);
    resizeWindow("input", size.width, size.height);
    Mat frameIn;
    auto p1 = videoSource.get(CAP_PROP_MODE);
    auto p2 = videoSource.get(CAP_PROP_FORMAT);
    auto p3 = videoSource.get(CAP_PROP_FPS);
    unsigned f = (unsigned)videoSource.get(CV_CAP_PROP_FOURCC);
    char fourcc[] = {
        (char)f,
        (char)(f >> 8),
        (char)(f >> 16),
        (char)(f >> 24),
        '\0'
    };
    cout << "\n\nCAPTURE DEVICE\n---------------"
         << "\nmode: " << p1
         << "\nformat: " << p2
         << "\nfps: " << p3
         << "\nFOURCC: " << string(fourcc)
         << "\nsize: " << size;
    // One frame, to check info:
    videoSource >> frameIn;
    cout << "\n\nFRAME IN (MAT):\n--------------"
         << "\ntype: " << frameIn.type()
         << "\ndepth: " << frameIn.depth()
         << "\nsize: " << frameIn.size();
    while (!frameIn.empty()) {
        imshow("input", frameIn);
        char key = waitKey(10);
        if (key == 27) {
            cvDestroyAllWindows();
            break;
        }
        videoSource >> frameIn;
    }
    return 0;
}
And the output (other than a gray window) is:
CAPTURE DEVICE
---------------
mode: 0
format: 0
fps: 30.0003
FOURCC: YUY2
size: [612 x 460]
FRAME IN (MAT):
--------------
type: 16
depth: 0
size: [612 x 460]
I'm new to OpenCV, so I might be missing something obvious. But from the above, it seems the camera is being detected. The FOURCC is YUY2, though mode and format both return 0.
When I check it, `frameIn.empty()` returns false, but as you can see, there doesn't seem to be anything in the Mats.
Any help would be greatly appreciated.
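Incidentally, the FOURCC unpacking done with bit shifts in the code above can be written as a small helper. This Python sketch turns the integer reported by `get(CAP_PROP_FOURCC)` back into its four characters (the bytes are packed little-endian, lowest byte first):

```python
def fourcc_to_str(value):
    """Decode an integer FOURCC (little-endian packed) into its 4 chars."""
    value = int(value)
    return "".join(chr((value >> (8 * i)) & 0xFF) for i in range(4))

# 'Y' | 'U'<<8 | 'Y'<<16 | '2'<<24 -- the YUY2 code reported above
print(fourcc_to_str(0x32595559))  # YUY2
```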
↧
Simple Video Display program using Thread
I would like to display video frames in a thread. When I run the code below, the program executes but does not display the live video. I suspect there might be an error in how waitKey is handled (but I'm not sure). Can anyone suggest what I need to do to display the video frames in a thread?
#include <iostream>
#include <thread>
#include <mutex>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
std::mutex mtxCam;
using namespace std;
using namespace cv;
void task(VideoCapture *cap, Mat *frame)
{
    while (true)
    {
        mtxCam.lock();
        *cap >> *frame;
        mtxCam.unlock();
        namedWindow("Demo", CV_WINDOW_AUTOSIZE);
        imshow("Demo", *frame);
        waitKey(1);
    }
}

int main() {
    Mat frame, image;
    VideoCapture cap(0);
    //cap.open(0);
    cap >> frame;
    thread t(task, &cap, &frame);
    while (true) {
        if (!frame.empty())
        {
            mtxCam.lock();
            frame.copyTo(image);
            mtxCam.unlock();
            imshow("Image main", image);
            waitKey(12);
        }
    }
}
↧
DirectShow camera gives black image
Hi,
I built OpenCV from source (with WITH_DSHOW enabled) a couple of days ago and am unable to get it to open my DirectShow camera (IDS uEye). I open the device with VideoCapture(0) and then start a loop reading frames. If the read was successful I write the Mat to a bmp file (for debugging) and load it into an OpenGL texture. This works great with my creative webcam, but not with my IDS uEye. Is there anything special I need to do to grab frames from a DirectShow camera such as the uEye?
I am using the C++ interface.
Thanks!
↧
How to read an MP4 file with OpenCV-Python
Hi there!
I am (unfortunately) absolutely new to both Python and OpenCV, but I'd like to use OpenCV (3.2.0) with Python (2.7.13) to extract specific frames from .mp4 files. I tried to follow the "Getting Started with Videos" tutorial to learn how to play video files, but even though VideoCapture(0) worked (i.e. streaming the webcam), I was not able to play a video file (mp4):
I used the following code named "Untitled.py":
import numpy as np
import cv2

# raw string: a plain '\f' in the path would be interpreted as an escape
cap = cv2.VideoCapture(r'PathToVideoFile\film.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
And this is all I get:
================== RESTART: C:\Python27\Scripts\Untitled.py ==================
>>>
(and nothing else...)
I downloaded ffmpeg and copied opencv_ffmpeg320_64.dll from opencv\build\x64\vc14\bin into C:\Python27, but it did not help...
Can you please help me with this issue?
Please let me know if you need more information, and sorry if my question is due to me missing something trivial.
Thank you very much in advance!
Adrien
↧
g_object_set: assertion 'G_IS_OBJECT (object)' failed
Hello,
I am using OpenCV (2.4.11) on Windows (Cygwin). I want to open a video file, but I get this error message:
> GLib-GObject-CRITICAL **: g_object_set: assertion 'G_IS_OBJECT (object)' failed
Editing images works great, so my installation in general should be fine.
Maybe I am missing a Codec?
My code looks like this:
VideoCapture cap(argv[1]);
if (cap.grab()) {
    cout << "success" << endl;
}
↧
OpenCV VideoWriter() video output is much faster than real time.
Hi everyone, I have an issue with my video capture code: the output video plays back super fast. The documents I found all suggest the problem is the camera fps not being in sync with the output video fps, which makes the output video faster or slower than real time (capture fps > write fps: slower; capture fps < write fps: faster).
In my case, I manually set both webcams to 10 fps, used 10 fps in VideoWriter() as well, and set waitKey() to 100 ms per frame to match 10 fps, but I'm still getting super-fast output. Is it due to the processing time of my video_multi_cam_light_detection() function? It takes some time to run, so each frame has a longer delay, which would pull the actual rate below the preset 10 fps. (I suspect this because another simple VideoCapture test without my detection function was only about 3 seconds fast.)
Again, thank you all for the help. If my logic or code is wrong in any way, please feel free to point it out, as I'm really new to OpenCV.
Here is my code:
int video_light_detection() {
string raw_video_path = "./RAW_VIDEO";
string processed_video_path = "./PROCESSED_VIDEO";
string raw_image_path = "./SAMPLE_CAPTURED";
string processed_image_path = "./SAMPLE_CAPTURED";
VideoCapture cap(0);
cap.set(CV_CAP_PROP_FPS, 10);
VideoCapture cap1(1);
cap1.set(CV_CAP_PROP_FPS, 10);
time_t current_time = time(0);
tm *time_p = localtime(&current_time);
int year = 1900 + time_p->tm_year;
int month = 1 + time_p->tm_mon;
int day = time_p->tm_mday;
int hour = time_p->tm_hour;
int min = time_p->tm_min;
ostringstream oss;
oss << "_" << hour << "_" << min << "_" << month << "_" << day << "_" << year;
string file_suffix = oss.str();
raw_video_path += file_suffix + string(".mkv");
processed_video_path += file_suffix + string(".mkv");
raw_image_path += file_suffix + string(".jpg");
processed_image_path += file_suffix + string(".jpg");
if (!cap.isOpened() || !cap1.isOpened()) {
cerr << "Camera Open Failure" << endl;
return -1;
}
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 960);
cap1.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 960);
VideoWriter video_raw;
VideoWriter video_processed;
namedWindow("Capture Window", WINDOW_NORMAL);
resizeWindow("Capture Window", 640, 1280);
namedWindow("Detection Window", WINDOW_NORMAL);
resizeWindow("Detection Window", 640, 1280);
int iterations = 0;
while (true) {
Mat frame, frame1, merged, bgr_image_filter_applied;
cap >> frame;
cap1 >> frame1;
vconcat(frame, frame1, merged);
if (iterations == 0) {
int width = merged.cols;
int height = merged.rows;
video_raw.open(raw_video_path, CV_FOURCC('M', 'J', 'P', 'G'), 10, Size(width, height), true);
video_processed.open(processed_video_path, CV_FOURCC('M', 'J', 'P', 'G'), 10, Size(width, height), true);
}
video_raw << merged;
bgr_image_filter_applied = video_multi_cam_light_detection(merged);
video_processed << bgr_image_filter_applied;
imshow("Capture Window", merged); // Show origional captured image
imshow("Detection Window", bgr_image_filter_applied); // Show processed image
iterations++;
if (waitKey(100) == 27) {
imwrite(raw_image_path, merged);
imwrite(processed_image_path, bgr_image_filter_applied);
video_raw.release();
video_processed.release();
break;
}
}
return 0;
}
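A sketch of one way to test the suspicion above: timestamp every captured frame, compute the effective capture rate, and pass that measured value to `VideoWriter` instead of the nominal 10 fps. The measurement itself is plain arithmetic (the timestamps would come from a steady clock inside the capture loop):

```python
def effective_fps(timestamps):
    """Average frames per second over a list of per-frame timestamps
    (seconds), e.g. collected with time.time() inside the capture loop."""
    if len(timestamps) < 2:
        raise ValueError("need at least two timestamps")
    elapsed = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / elapsed

# Frames that actually arrived every 250 ms -> 4 fps, not the preset 10.
stamps = [0.0, 0.25, 0.5, 0.75, 1.0]
print(effective_fps(stamps))  # 4.0
```

If the detection function plus `waitKey(100)` pushes the real rate to, say, 4 fps while the file is written at 10 fps, playback is 2.5x fast, which matches the symptom described.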
↧
Processing video recorded from a vehicle moving at speeds ranging from 20-150 km/h
Hi Team,
I want to detect the speed of other vehicles moving in front of me in the same direction. Is it possible to detect speed using OpenCV? If yes, which specific algorithms or OpenCV methods and functions should I use to achieve this? The program should process only automobiles and ignore other roadside elements such as dividers, poles, trees, etc.
Thanks,
Arun
↧
picamera alternative in OpenCV (C++)
Hi,
There is a python package available for controlling Raspberry Pi camera called [picamera](https://picamera.readthedocs.io/en/release-1.13/#) which exposes all the options available in [raspistill](https://www.raspberrypi.org/documentation/raspbian/applications/camera.md).
I need to access/change the "sensor mode" (which changes the resolution) for capturing. I am using OpenCV (C++) for my application, but there is no option to change the sensor mode in `cv::VideoCapture::set`. There is a `CV_CAP_PROP_MODE` field, but that is not the same. So my questions are:
1. Is there a way to change the "sensor mode" in OpenCV?
2. If not, is there a C++ alternative available like [picamera](https://picamera.readthedocs.io/en/release-1.13/#)?
3. If not, is there a way to use this Python package in my C++ program?
↧
Problem with cv::VideoCapture
Hello,
I have a problem in a Visual Studio 2015 project with OpenCV 3.2.
The same code works without any problem in OpenCV 2.4.
void Example_video1()
{
    //const std::string videoStreamAddress = "http://@/video.cgi?.mjpg";
    const std::string videoStreamAddress = "http://@/video.cgi?.mjpg";
    cv::namedWindow("Example3", cv::WINDOW_AUTOSIZE);
    cv::VideoCapture cap;
    cap.open(string("c:\\one.mp4"));
    //cout << frame.total() << endl;
    int i = 1;
    Mat frame;
    for (;;)
    {
        cap >> frame; // get a new frame from file
        //cap.read(frame);
        if (frame.empty())
        {
            cout << "Empty" << endl;
            break;
        }
        else
            cout << "Frame: " << i << endl;
        i++;
        imshow("Example3", frame);
        if (waitKey(30) >= 0) break;
    }
    waitKey(0);
}
I get this exception on this line:
cap >> frame; // get a new frame from file
Exception thrown at 0x00007FF8004368D8 in ConsoleApplication2.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000ABBB00EA40.
Kindly advise.
↧
can't read mp4 with opencv
I've run into a problem with my OpenCV installation: it is unable to open an mp4 video. My system is Ubuntu 16.04, 64-bit, OpenCV 3.2, used from Python 3.5.
`VideoCapture.read` returns `False` and `None`.
There are other questions with this problem, but they target different platforms or different opencv versions.
Apparently, I'm missing the proper codec. So I ran `make uninstall` from my build directory, purged `opencv*` with apt, and built from source again, this time making sure that `ffmpeg` was installed before compilation.
Here are my steps:
- clone opencv and opencv_contrib
- `cd opencv/`
- `mkdir build`
- `cd build`
- `cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules -D BUILD_EXAMPLES=ON ..`
- `make -j 8`
- `sudo make install`
I checked the output of cmake, ffmpeg is there:
Video I/O:
-- DC1394 1.x: NO
-- DC1394 2.x: NO
-- FFMPEG: YES
-- avcodec: YES (ver 56.60.100)
-- avformat: YES (ver 56.40.101)
-- avutil: YES (ver 54.31.100)
-- swscale: YES (ver 3.1.101)
-- avresample: NO
-- GStreamer: NO
-- OpenNI: NO
-- OpenNI PrimeSensor Modules: NO
-- OpenNI2: NO
-- PvAPI: NO
-- GigEVisionSDK: NO
-- Aravis SDK: NO
-- UniCap: NO
-- UniCap ucil: NO
-- V4L/V4L2: NO/YES
-- XIMEA: NO
-- Xine: NO
-- gPhoto2: NO
But the problem persists. How can I fix this?
↧